2.7. Documentation

Due to constraints on the length of this book, we cannot possibly introduce every single MindSpore or PyTorch function and class. The API documentation and additional tutorials and examples provide plenty of documentation beyond the book. This section provides some guidance for exploring these APIs.

2.7.1. Finding All the Functions and Classes in a Module

To know which functions and classes can be called in a module, we invoke the dir function. For example, we can query all the attributes of a module: the top-level mindspore package here, or torch.distributions, PyTorch's module for probability distributions:

import mindspore

print(dir(mindspore))
['Accuracy', 'BackupAndRestore', 'BleuScore', 'COMPATIBLE', 'COOTensor', 'CSRTensor', 'Callback', 'CheckpointConfig', 'Cifar100ToMR', 'Cifar10ToMR', 'ConfusionMatrix', 'ConfusionMatrixMetric', 'ConvertModelUtils', 'ConvertNetUtils', 'CosineSimilarity', 'CsvToMR', 'DatasetHelper', 'Dice', 'DynamicLossScaleManager', 'EarlyStopping', 'EnvProfiler', 'Event', 'ExitByRequest', 'F1', 'FAILED', 'Fbeta', 'FileReader', 'FileWriter', 'FixedLossScaleManager', 'FlopsUtilizationCollector', 'GRAPH_MODE', 'Generator', 'HausdorffDistance', 'History', 'ImageNetToMR', 'Int', 'JitConfig', 'LAX', 'LambdaCallback', 'Layout', 'LearningRateScheduler', 'Loss', 'LossMonitor', 'LossScaleManager', 'MAE', 'MSE', 'MeanSurfaceDistance', 'Metric', 'MindPage', 'MnistToMR', 'Model', 'ModelCheckpoint', 'Node', 'NodeType', 'OcclusionSensitivity', 'OnRequestExit', 'PYNATIVE_MODE', 'ParallelMode', 'Parameter', 'ParameterTuple', 'Perplexity', 'Precision', 'Profiler', 'QuantDtype', 'ROC', 'Recall', 'ReduceLROnPlateau', 'RootMeanSquareDistance', 'RowTensor', 'RunContext', 'STRICT', 'SUCCESS', 'ScopedValue', 'SparseTensor', 'Stream', 'StreamCtx', 'SummaryCollector', 'SummaryLandscape', 'SummaryRecord', 'Symbol', 'SymbolTree', 'TFRecordToMR', 'Tensor', 'TensorType', 'TimeMonitor', 'Top1CategoricalAccuracy', 'Top5CategoricalAccuracy', 'TopKCategoricalAccuracy', 'TrainFaultTolerance', 'Type', '_Function', '_NullType', '__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '__version__', '_c_dataengine', '_c_expression', '_c_mindrecord', '_check_jit_forbidden_api', '_checkparam', '_extends', '_no_grad', '_null', '_op_impl', '_reuse_data_ptr', '_tft_handler', 'absolute_import', 'amp', 'arg_dtype_cast', 'arg_handler', 'async_ckpt_thread_status', 'auc', 'bfloat16', 'bool_', 'boost', 'build_searched_strategy', 'build_train_network', 'byte', 'check_checkpoint', 'ckpt_to_safetensors', 'common', 'communication', 'communication_stream', 
'complex128', 'complex64', 'connect_network_with_dataset', 'constexpr', 'context', 'convert_model', 'current_stream', 'data_sink', 'dataset', 'default_config', 'default_generator', 'default_stream', 'device_context', 'device_manager', 'dispatch_threads_num', 'double', 'dryrun', 'dtype', 'dtype_to_nptype', 'dtype_to_pytype', 'empty_cache', 'experimental', 'export', 'export_split_mindir', 'float16', 'float32', 'float64', 'float_', 'flops_collection', 'from_numpy', 'get_algo_parameters', 'get_auto_parallel_context', 'get_ckpt_path_with_strategy', 'get_context', 'get_current_device', 'get_grad', 'get_level', 'get_log_config', 'get_metric_fn', 'get_obj_module_and_name_info', 'get_offload_context', 'get_ps_context', 'get_py_obj_dtype', 'get_rng_state', 'get_seed', 'grad', 'hal', 'half', 'initial_seed', 'int16', 'int32', 'int64', 'int8', 'int_', 'intc', 'intp', 'is_invalid_or_jit_forbidden_method', 'is_jit_forbidden_module', 'is_tensor', 'jacfwd', 'jacrev', 'jit', 'jit_class', 'jvp', 'launch_blocking', 'lazy_inline', 'list_', 'load', 'load_checkpoint', 'load_checkpoint_async', 'load_distributed_checkpoint', 'load_mindir', 'load_obf_params_into_net', 'load_param_into_net', 'load_segmented_checkpoints', 'log', 'manual_seed', 'max_memory_allocated', 'max_memory_reserved', 'memory_allocated', 'memory_reserved', 'memory_stats', 'memory_summary', 'merge_pipeline_strategys', 'merge_sliced_parameter', 'mindrecord', 'mint', 'ms_memory_recycle', 'mutable', 'names', 'nn', 'no_inline', 'number', 'numpy', 'obfuscate_ckpt', 'ops', 'parallel', 'parameter_broadcast', 'parse_print', 'profiler', 'pytype_to_dtype', 'qint4x2', 'rank_list_for_transform', 'rearrange_inputs', 'recompute', 'reset_algo_parameters', 'reset_auto_parallel_context', 'reset_max_memory_allocated', 'reset_max_memory_reserved', 'reset_peak_memory_stats', 'reset_ps_context', 'reshard', 'restore_group_info_list', 'rewrite', 'run_check', 'runtime', 'runtime_execution_order_check', 'safeguard', 'safetensors_to_ckpt', 
'save_checkpoint', 'save_mindir', 'seed', 'set_algo_parameters', 'set_auto_parallel_context', 'set_context', 'set_cpu_affinity', 'set_cur_stream', 'set_dec_mode', 'set_deterministic', 'set_device', 'set_dump', 'set_enc_key', 'set_enc_mode', 'set_kernel_launch_group', 'set_memory', 'set_offload_context', 'set_ps_context', 'set_recursion_limit', 'set_rng_state', 'set_seed', 'shard', 'short', 'single', 'stress_detect', 'string', 'sync_pipeline_shared_parameters', 'synchronize', 'tensor', 'tensor_type', 'train', 'transform_checkpoint_by_rank', 'transform_checkpoints', 'tuple_', 'type_none', 'ubyte', 'uint', 'uint16', 'uint32', 'uint64', 'uint8', 'uintc', 'uintp', 'unified_safetensors', 'ushort', 'utils', 'value_and_grad', 'version', 'vjp', 'vmap']
import torch

print(dir(torch.distributions))
['AbsTransform', 'AffineTransform', 'Bernoulli', 'Beta', 'Binomial', 'CatTransform', 'Categorical', 'Cauchy', 'Chi2', 'ComposeTransform', 'ContinuousBernoulli', 'CorrCholeskyTransform', 'CumulativeDistributionTransform', 'Dirichlet', 'Distribution', 'ExpTransform', 'Exponential', 'ExponentialFamily', 'FisherSnedecor', 'Gamma', 'Geometric', 'Gumbel', 'HalfCauchy', 'HalfNormal', 'Independent', 'IndependentTransform', 'InverseGamma', 'Kumaraswamy', 'LKJCholesky', 'Laplace', 'LogNormal', 'LogisticNormal', 'LowRankMultivariateNormal', 'LowerCholeskyTransform', 'MixtureSameFamily', 'Multinomial', 'MultivariateNormal', 'NegativeBinomial', 'Normal', 'OneHotCategorical', 'OneHotCategoricalStraightThrough', 'Pareto', 'Poisson', 'PositiveDefiniteTransform', 'PowerTransform', 'RelaxedBernoulli', 'RelaxedOneHotCategorical', 'ReshapeTransform', 'SigmoidTransform', 'SoftmaxTransform', 'SoftplusTransform', 'StackTransform', 'StickBreakingTransform', 'StudentT', 'TanhTransform', 'Transform', 'TransformedDistribution', 'Uniform', 'VonMises', 'Weibull', 'Wishart', '__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'bernoulli', 'beta', 'biject_to', 'binomial', 'categorical', 'cauchy', 'chi2', 'constraint_registry', 'constraints', 'continuous_bernoulli', 'dirichlet', 'distribution', 'exp_family', 'exponential', 'fishersnedecor', 'gamma', 'geometric', 'gumbel', 'half_cauchy', 'half_normal', 'identity_transform', 'independent', 'inverse_gamma', 'kl', 'kl_divergence', 'kumaraswamy', 'laplace', 'lkj_cholesky', 'log_normal', 'logistic_normal', 'lowrank_multivariate_normal', 'mixture_same_family', 'multinomial', 'multivariate_normal', 'negative_binomial', 'normal', 'one_hot_categorical', 'pareto', 'poisson', 'register_kl', 'relaxed_bernoulli', 'relaxed_categorical', 'studentT', 'transform_to', 'transformed_distribution', 'transforms', 'uniform', 'utils', 'von_mises', 'weibull', 'wishart']

Generally, we can ignore functions that start and end with "__" (double underscores), which are special objects in Python, as well as functions that start with a single "_" (single underscore), which are usually internal functions. Based on the remaining function or attribute names, we might hazard a guess that this module offers various methods for generating random numbers, including sampling from the uniform distribution (uniform), the normal distribution (normal), and the multinomial distribution (multinomial).
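The filtering described above is easy to automate. As a sketch, the following uses the standard library's random module as a stand-in, since the same idiom works for any module, including mindspore or torch.distributions:

```python
import random

# dir lists every attribute of a module; names that begin with an
# underscore are special or internal, so we drop them to keep only
# the public API.
public_names = [name for name in dir(random) if not name.startswith('_')]
print(public_names)
```

Scanning the result, names such as uniform, gauss, and choices again suggest what the module offers before reading any documentation.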

2.7.2. Finding the Usage of Specific Functions and Classes

For more specific instructions on how to use a given function or class, we can invoke the help function. As an example, let's explore the usage instructions for the tensor ones function.

import mindspore.ops as ops

help(ops.ones)
Help on function ones in module mindspore.ops.auto_generate.gen_ops_def:

ones(shape, dtype=None)
    Creates a tensor filled with value ones, whose shape and type are described by the first argument size and second argument dtype respectively.

    .. warning::
        For argument shape, Tensor type input will be deprecated in the future version.

    Args:
        shape (Union[tuple[int], list[int], int, Tensor]): The specified shape of output tensor. Only positive integer or
            tuple or Tensor containing positive integers are allowed. If it is a Tensor,
            it must be a 0-D or 1-D Tensor with int32 or int64 dtypes.
        dtype (mindspore.dtype): The specified type of output tensor. If dtype is None ,
            mindspore.float32 will be used. Default: None .

    Returns:
        Tensor, whose dtype and size are defined by input.

    Raises:
        TypeError: If shape is neither an int nor an tuple/list/Tensor of int.

    Supported Platforms:
        Ascend GPU CPU

    Examples:
        >>> import mindspore
        >>> from mindspore import ops
        >>> output = ops.ones((2, 2), mindspore.float32)
        >>> print(output)
        [[1. 1.]
         [1. 1.]]
help(torch.ones)
Help on built-in function ones in module torch:

ones(...)
    ones(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor

    Returns a tensor filled with the scalar value 1, with the shape defined
    by the variable argument size.

    Args:
        size (int...): a sequence of integers defining the shape of the output tensor.
            Can be a variable number of arguments or a collection like a list or tuple.

    Keyword arguments:
        out (Tensor, optional): the output tensor.
        dtype (torch.dtype, optional): the desired data type of returned tensor.
            Default: if None, uses a global default (see torch.set_default_dtype()).
        layout (torch.layout, optional): the desired layout of returned Tensor.
            Default: torch.strided.
        device (torch.device, optional): the desired device of returned tensor.
            Default: if None, uses the current device for the default tensor type
            (see torch.set_default_device()). device will be the CPU
            for CPU tensor types and the current CUDA device for CUDA tensor types.
        requires_grad (bool, optional): If autograd should record operations on the
            returned tensor. Default: False.

    Example::

        >>> torch.ones(2, 3)
        tensor([[ 1.,  1.,  1.],
                [ 1.,  1.,  1.]])

        >>> torch.ones(5)
        tensor([ 1.,  1.,  1.,  1.,  1.])

From the documentation, we can see that the ones function creates a new tensor with the specified shape and sets all the elements to the value of 1. Let's run a quick test to confirm this interpretation:

ops.ones(4)
Tensor(shape=[4], dtype=Float32, value= [ 1.00000000e+00,  1.00000000e+00,  1.00000000e+00,  1.00000000e+00])
torch.ones(4)
tensor([1., 1., 1., 1.])

In the Jupyter notebook, we can use ? to display the documentation in another browser window. For example, list? will create content that is almost identical to help(list), displaying it in a new browser window. In addition, if we use two question marks, such as list??, the Python code implementing the function will also be displayed.
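Outside Jupyter, the standard library's inspect module offers roughly the same capability as ??: inspect.getsource returns the source of any function implemented in Python. A minimal sketch:

```python
import inspect
import json

# ?? shows a function's implementation in Jupyter; inspect.getsource
# does the same programmatically for functions written in Python
# (it raises TypeError for C built-ins such as len).
source = inspect.getsource(json.dumps)
print(source.splitlines()[0])  # the "def dumps(..." signature line
```

Note that, just like ??, this only works for code written in Python; functions implemented in C (including many framework operators) expose no source this way.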

2.7.3. Summary

  • The official documentation provides plenty of descriptions and examples that are beyond this book.

  • We can look up documentation for the usage of an API by calling the dir and help functions, or by using ? and ?? in Jupyter notebooks.

2.7.4. Exercises

  1. Look up the documentation for any function or class in the deep learning framework. Can you also find the documentation on the official website of the framework?