Select Git revision
  • main default protected
  • v0.10.4
  • v0.10.3
  • v0.10.2
  • v0.10.1
  • v0.10.0
  • v0.9.1
  • v0.9.0
  • v0.8.5
  • v0.8.4
  • v0.8.3
  • v0.8.2
  • v0.8.1
  • v0.8.0
  • v0.7.4
  • v0.7.3
  • v0.7.2
  • v0.7.1
  • v0.7.0
  • v0.6.0
  • v0.5.0
Commit history on the main branch (newest first):
  • [Feature] Support save_optimizer=False for DeepSpeed (#1474)
  • Fix a typo in visualizer.py (#1476)
  • [Feature] Add the support for musa device support (#1453)
  • [Fix] Fix dist.collect_results to keep all ranks' elements (#1469)
  • [Fix] Fix the resume of iteration (#1471)
  • [Fix] Fix Config.to_dict (#1465)
  • [Docs] Add the usage of ProfilerHook (#1466)
  • [Docs] Fix nnodes in the doc of ddp training (#1462)
  • bump version to v0.10.2 (#1460)
  • [Fix] Support multi-node distributed training with NPU backend (#1459)
  • [Fix] Fix placement policy in ColossalAIStrategy (#1440)
  • [Fix] Fix load_model_state_dict in BaseStrategy (#1447)
  • [Fix] Use ImportError to cover ModuleNotFoundError raised by opencv-python (#1438)
  • bump version to v0.10.1 (#1436)
  • [Docs] Add build mmengine-lite from source (#1435)
  • [Fix] Fix collect_env without opencv (#1434)
  • [Fix] Fix deploy.yml (#1431)
  • bump version to v0.10.0 (#1430)
  • [Feature] Support for installing mmengine without opencv (#1429)
  • [Fix] Fix CI for torch2.1.0 (#1418)
  • [Fix] Fix scale_lr in SingleDeviceStrategy (#1428)
  • [Bugs] Fix bugs in colo optimwrapper (#1426)
  • [Fix] Support exclude_frozen_parameters for DeepSpeedStrategy's resume (#1424)
  • bump version to v0.9.1 (#1421)
  • [Enhancement] Enhance inputs_to_half in DeepSpeedStrategy (#1400)
  • [Feature] Add `exclude_frozen_parameters` for `DeepSpeedStrategy` (#1415)
  • [Fix] ConcatDataset raises error when metainfo is np.array (#1407)
  • [Fix] Fix a bug when module is missing in low version of bitsandbytes (#1388)
  • [Fix] Fix func params using without init in OneCycleLR (#1403)
  • [Fix] Fix new config in visualizer (#1390)
  • [Docs] Rename master to main (#1397)
  • Add torch 2.1.0 checking in CI (#1389)
  • [Feature] Support slurm distributed training for mlu devices (#1396)
  • bump version to v0.9.0 (#1384)
  • [Docs] Fix typo (#1385)
  • [Feature] Support using gradient checkpointing in FSDP (#1382)
  • Update the version info (#1383)
  • [Feature] Runner supports setting the number of iterations for per epoch (#1292)
  • [Enhance] Support for installing minimal runtime dependencies (#1362)
  • [Enhance] metainfo of dataset can be a generic dict-like Mapping (#1378)