Unverified Commit f356b3c2 authored by Evan, committed by GitHub

[Docs] Minor fixes in docs to remove or replace unicode chars with ascii chars (#1018)

* Remove CN comments in EN docs code
* Replace some full-width unicode chars with half-width ascii chars
parent cbb67140
@@ -424,7 +424,7 @@ from mmengine.registry import DATASETS
class ExampleDatasetWrapper:

    def __init__(self, dataset, lazy_init=False, ...):
-        # Build the source dataset（self.dataset）
+        # Build the source dataset (self.dataset)
        if isinstance(dataset, dict):
            self.dataset = DATASETS.build(dataset)
        elif isinstance(dataset, BaseDataset):
...
@@ -31,7 +31,7 @@ wget https://raw.githubusercontent.com/open-mmlab/mmengine/main/docs/resources/c
A valid configuration file should define a set of key-value pairs. Here are a few examples:

-Python：
+Python:

```Python
test_int = 1
@@ -39,7 +39,7 @@ test_list = [1, 2, 3]
test_dict = dict(key1='value1', key2=0.1)
```
-Json：
+Json:

```json
{
@@ -49,7 +49,7 @@ Json:
}
```

-YAML：
+YAML:

```yaml
test_int: 1
@@ -109,7 +109,7 @@ We can use `Config` in combination with the [Registry](./registry.md) to build
Here is an example of defining optimizers in a configuration file.

-`config_sgd.py`：
+`config_sgd.py`:

```python
optimizer = dict(type='SGD', lr=0.1, momentum=0.9, weight_decay=0.0001)
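```

As a quick sketch of how such a file is consumed through the registry (assuming `config_sgd.py` sits in the working directory; the toy model is our own example):

```python
import torch.nn as nn

from mmengine.config import Config
from mmengine.registry import OPTIMIZERS

cfg = Config.fromfile('config_sgd.py')
model = nn.Conv2d(1, 1, 1)
# Inject the model parameters, then build the optimizer via the registry.
cfg.optimizer.params = model.parameters()
optimizer = OPTIMIZERS.build(cfg.optimizer)
print(optimizer)
```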
@@ -156,13 +156,13 @@ We address these issues with the inheritance mechanism, as detailed below.
Here is an example to illustrate the inheritance mechanism.

-`optimizer_cfg.py`：
+`optimizer_cfg.py`:

```python
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
```

-`resnet50.py`：
+`resnet50.py`:

```python
_base_ = ['optimizer_cfg.py']
@@ -182,13 +182,13 @@ print(cfg.optimizer)
`_base_` is a reserved field of the configuration file. It specifies the base files that the current file inherits from. Inheriting from multiple files merges all of their fields at once, but requires that no field is defined repeatedly across the base files.

-`runtime_cfg.py`：
+`runtime_cfg.py`:

```python
gpu_ids = [0, 1]
```

-`resnet50_runtime.py`：
+`resnet50_runtime.py`:

```python
_base_ = ['optimizer_cfg.py', 'runtime_cfg.py']
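```

A sketch of what the inheritance yields once the merged file is loaded (assuming the config files above are in the working directory):

```python
from mmengine.config import Config

cfg = Config.fromfile('resnet50_runtime.py')
# Fields from both base files are merged into a single config.
print(cfg.optimizer)  # from optimizer_cfg.py
print(cfg.gpu_ids)    # [0, 1], from runtime_cfg.py
```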
@@ -214,7 +214,7 @@ Sometimes, we want to modify some of the fields in the inherited files. For exam
In this case, you can simply redefine those fields in the new configuration file. Note that since the optimizer field is a dictionary, we only need to redefine the modified keys. This rule also applies to adding fields.

-`resnet50_lr0.01.py`：
+`resnet50_lr0.01.py`:

```python
_base_ = ['optimizer_cfg.py', 'runtime_cfg.py']
@@ -245,7 +245,7 @@ gpu_ids = [0]
Sometimes we want to delete keys rather than modify or add them. In this case, we need to set `_delete_=True` in the target field (a `dict`) to delete all the keys that do not appear in the newly defined dictionary.

-`resnet50_delete_key.py`：
+`resnet50_delete_key.py`:

```python
_base_ = ['optimizer_cfg.py', 'runtime_cfg.py']
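# (Illustrative continuation; the concrete values are an assumption.)
# _delete_=True drops every inherited optimizer key that is not
# re-declared here, so momentum and weight_decay are removed.
optimizer = dict(_delete_=True, type='SGD', lr=0.01)
```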
@@ -298,7 +298,7 @@ a['type'] = 'MobileNet'
`Config` is not able to parse such a configuration file (it will raise an error when parsing). `Config` provides a more pythonic way to modify base variables in `python` configuration files.

-`modify_base_var.py`：
+`modify_base_var.py`:

```python
_base_ = ['resnet50.py']
@@ -335,7 +335,7 @@ optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
model = dict(type='ResNet', depth=50)
```

-Similarly, we can dump configuration files in `json`, `yaml` format：
+Similarly, we can dump configuration files in `json` or `yaml` format:

`resnet50_dump.yaml`
...
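A minimal sketch of the dump call itself (`Config.dump` writes the merged config; here the `.yaml` suffix selects YAML output):

```python
from mmengine.config import Config

cfg = Config.fromfile('resnet50.py')
# Dump the merged result of the config and all its base files.
cfg.dump('resnet50_dump.yaml')
```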
@@ -36,13 +36,13 @@ Currently, we support the following initialization methods:
<tr>
<td><a class="reference internal" href="../api/generated/mmengine.model.TruncNormalInit.html#mmengine.model.TruncNormalInit">TruncNormalInit</a></td>
<td>TruncNormal</td>
-<td>Initialize the weight by truncated normal distribution, and initialize the bias with a constant，commonly used for Transformer</td>
+<td>Initialize the weight by truncated normal distribution, and initialize the bias with a constant, commonly used for Transformer</td>
</tr>
<tr>
<td><a class="reference internal" href="../api/generated/mmengine.model.UniformInit.html#mmengine.model.UniformInit">UniformInit</a></td>
<td>Uniform</td>
-<td>Initialize the weight by uniform distribution, and initialize the bias with a constant，commonly used for convolution</td>
+<td>Initialize the weight by uniform distribution, and initialize the bias with a constant, commonly used for convolution</td>
</tr>
<tr>
@@ -353,7 +353,7 @@ from mmengine.model import normal_init
normal_init(model, mean=0, std=0.01, bias=0)
```

-Similarly, we could also use [Kaiming](https://arxiv.org/abs/1502.01852) initialization and [Xavier](http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf) initialization：
+Similarly, we could also use [Kaiming](https://arxiv.org/abs/1502.01852) initialization and [Xavier](http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf) initialization:

```python
from mmengine.model import kaiming_init, xavier_init
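# (Illustrative usage, a sketch; the toy module below is our own example.)
import torch.nn as nn

conv = nn.Conv2d(3, 16, 3)
kaiming_init(conv)                         # Kaiming weight init, bias = 0
xavier_init(conv, distribution='uniform')  # Xavier-uniform weight init
```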
@@ -387,12 +387,12 @@ Currently, MMEngine provides the following initialization functions:
<tr>
<td><a class="reference internal" href="../api/generated/mmengine.model.trunc_normal_init.html#mmengine.model.trunc_normal_init">trunc_normal_init</a></td>
-<td>Initialize the weight by truncated normal distribution, and initialize the bias with a constant，commonly used for Transformer</td>
+<td>Initialize the weight by truncated normal distribution, and initialize the bias with a constant, commonly used for Transformer</td>
</tr>
<tr>
<td><a class="reference internal" href="../api/generated/mmengine.model.uniform_init.html#mmengine.model.uniform_init">uniform_init</a></td>
-<td>Initialize the weight by uniform distribution, and initialize the bias with a constant，commonly used for convolution</td>
+<td>Initialize the weight by uniform distribution, and initialize the bias with a constant, commonly used for convolution</td>
</tr>
<tr>
...
@@ -2,7 +2,7 @@
Test time augmentation (TTA) is a data augmentation strategy used during the testing phase. It involves applying various augmentations, such as flipping and scaling, to the same image and then merging the predictions of each augmented image to produce a more accurate prediction. To make it easier for users to use TTA, MMEngine provides the [BaseTTAModel](mmengine.model.BaseTTAModel) class, which allows users to implement different TTA strategies by simply extending `BaseTTAModel` according to their needs.

-The core implementation of TTA is usually divided into two parts：
+The core implementation of TTA is usually divided into two parts:

1. Data augmentation: This part is implemented in MMCV; see the API docs of [TestTimeAug](mmcv.transforms.TestTimeAug) for more information.
2. Merge the predictions: The subclasses of `BaseTTAModel` merge the predictions of the augmented data in the `test_step` method to improve the accuracy of predictions (a sketch of such a subclass follows below).
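A minimal sketch of a `BaseTTAModel` subclass (`merge_preds` is the extension point; the score-averaging logic and the `pred_score` field are illustrative assumptions, not taken from the original doc):

```python
from mmengine.model import BaseTTAModel


class AverageScoreTTAModel(BaseTTAModel):
    """Toy TTA model that averages the scores of augmented views."""

    def merge_preds(self, data_samples_list):
        # data_samples_list[i] holds the predictions of all augmented
        # views of the i-th image; keep the first sample and overwrite
        # its score with the average over all views.
        merged = []
        for data_samples in data_samples_list:
            avg_score = sum(s.pred_score for s in data_samples) / len(data_samples)
            merged_sample = data_samples[0]
            merged_sample.pred_score = avg_score
            merged.append(merged_sample)
        return merged
```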
@@ -119,7 +119,7 @@ image3 = dict(
)
```

-where `data_{i}_{j}` means the enhanced data，and `data_sample_{i}_{j}` means the ground truth of enhanced data. Then the data will be processed by `Dataloader`, which contributes to the following format:
+where `data_{i}_{j}` means the enhanced data, and `data_sample_{i}_{j}` means the ground truth of the enhanced data. Then the data will be processed by the `Dataloader`, which yields the following format:

```python
data_batch = dict(
...
@@ -1369,14 +1369,14 @@ runner = Runner(
    work_dir='./work_dir',
    randomness=randomness,
    env_cfg=env_cfg,
-    launcher='none',  # do not launch distributed training
+    launcher='none',
    optim_wrapper=optim_wrapper,
    train_dataloader=train_dataloader,
    train_cfg=dict(by_epoch=True, max_epochs=5, val_interval=1),
    val_dataloader=val_dataloader,
    val_evaluator=val_evaluator,
    val_cfg=val_cfg,
-    test_dataloader=val_dataloader,  # assume testing and validation use the same data and evaluator
+    test_dataloader=val_dataloader,
    test_evaluator=val_evaluator,
    test_cfg=dict(type='TestLoop'),
)
...
@@ -66,7 +66,7 @@ def train_step(self, data, optim_wrapper):
    # Parse the loss dict and return the parsed losses for optimization
    # and log_vars for logging
    parsed_losses, log_vars = self.parse_losses(losses)
-    optim_wrapper.update_params(parsed_losses)  # update the parameters
+    optim_wrapper.update_params(parsed_losses)
    return log_vars
```
...
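For orientation, a sketch of the conventional `train_step` flow around the excerpt above, following the usual `BaseModel` pattern (treat the exact preprocessing and forward calls as assumptions):

```python
def train_step(self, data, optim_wrapper):
    # Enter the optimizer context (covers AMP and gradient accumulation).
    with optim_wrapper.optim_context(self):
        # Preprocess the raw dataloader output (device transfer, batching).
        data = self.data_preprocessor(data, training=True)
        # Run the forward pass in loss mode to get a dict of loss tensors.
        losses = self._run_forward(data, mode='loss')
    # Parse the loss dict into one scalar for optimization plus log values.
    parsed_losses, log_vars = self.parse_losses(losses)
    optim_wrapper.update_params(parsed_losses)
    return log_vars
```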
@@ -243,7 +243,7 @@ As shown in the above example, `OptimWrapperDict` exports learning rates and mom

### Configure the OptimWrapper in [Runner](runner.md)

-We first need to configure the `optimizer` for the OptimWrapper. MMEngine automatically adds all optimizers in PyTorch to the `OPTIMIZERS` registry, and users can specify the optimizers they need in the form of a `dict`. All supported optimizers in PyTorch are listed [here](https://pytorch.org/docs/stable/optim.html#algorithms). In addition, `DAdaptAdaGrad`, `DAdaptAdam`, and `DAdaptSGD` can be used by installing [dadaptation](https://github.com/facebookresearch/dadaptation). `Lion` optimizer can used by install [lion-pytorch](https://github.com/lucidrains/lion-pytorch)
+We first need to configure the `optimizer` for the OptimWrapper. MMEngine automatically adds all PyTorch optimizers to the `OPTIMIZERS` registry, and users can specify the optimizers they need in the form of a `dict`. All supported PyTorch optimizers are listed [here](https://pytorch.org/docs/stable/optim.html#algorithms). In addition, `DAdaptAdaGrad`, `DAdaptAdam`, and `DAdaptSGD` can be used by installing [dadaptation](https://github.com/facebookresearch/dadaptation), and the `Lion` optimizer can be used by installing [lion-pytorch](https://github.com/lucidrains/lion-pytorch).

Now we take setting up an SGD OptimWrapper as an example.
...
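A sketch of that configuration in dict form, as it would be passed to the `Runner` (the learning-rate and momentum values are illustrative):

```python
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(type='SGD', lr=0.01, momentum=0.9))
```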
@@ -148,7 +148,7 @@ Note that the `begin` and `end` parameters are added here. These two parameters
In the above example, `by_epoch` of `LinearLR` in the warm-up phase is False, which means that the scheduler only takes effect in the first 50 iterations. After 50 iterations, it no longer takes effect, and the second scheduler, `MultiStepLR`, controls the learning rate. When combining different schedulers, the `by_epoch` parameter does not have to be the same for each scheduler.

-Here is another example
+Here is another example:

```python
param_scheduler = [
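    # (Illustrative continuation; the milestone and factor values are
    # assumptions, shown only to sketch how schedulers are combined.)
    # Linear warm-up over the first 50 iterations ...
    dict(type='LinearLR', start_factor=0.001, by_epoch=False, begin=0, end=50),
    # ... then step decay counted in epochs.
    dict(type='MultiStepLR', by_epoch=True, milestones=[8, 11], gamma=0.1),
]
```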
@@ -200,7 +200,7 @@ param_scheduler = [
MMEngine also provides a set of generic parameter schedulers for scheduling other hyperparameters in the `param_groups` of the optimizer. Change `LR` in the class name of a learning rate scheduler to `Param`, e.g. `LinearParamScheduler`. Users can schedule a specific hyperparameter by setting the `param_name` variable of the scheduler.

-Here is an example
+Here is an example:

```python
param_scheduler = [
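    # (Illustrative sketch) schedule momentum instead of the learning rate
    # by naming the target entry of the optimizer's param_groups.
    dict(type='LinearParamScheduler',
         param_name='momentum',
         start_factor=0.001,
         by_epoch=False,
         begin=0,
         end=1000),
]
```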
...