DeepLabCut_usage
DeepLabCut is a software package for markerless pose estimation of animals performing various tasks.
Installation
- Windows: Anaconda environment;
  - install Anaconda;
  - conda env create -f DLC-GPU.yaml (run in the DeepLabCut project folder)
- Ubuntu: Docker container;
System Overview
# main steps to use DeepLabCut
# import the library
import deeplabcut
# create a new project
deeplabcut.create_new_project('project_name','experimenter',['path of video1','video2'])
# extract frames
deeplabcut.extract_frames(config_path)
# label frames
deeplabcut.label_frames(config_path)
# check labels (optional)
deeplabcut.check_labels(config_path)
# create the training dataset
deeplabcut.create_training_dataset(config_path)
# train the network
deeplabcut.train_network(config_path)
# evaluate the trained network
deeplabcut.evaluate_network(config_path)
# analyze videos
deeplabcut.analyze_videos(config_path,['path of videofolder'])
# plot the results
deeplabcut.plot_trajectories(config_path,['path of video'])
# create a labeled video
deeplabcut.create_labeled_video(config_path,['path of video'])
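analyze_videos stores, for every frame and bodypart, an (x, y, likelihood) triple; detections below the pcutoff value in config.yaml are usually treated as unreliable. A minimal post-processing sketch in plain Python, assuming that per-frame triple format; the trajectory data is made up and `filter_low_confidence` is a hypothetical helper, not a DeepLabCut function:

```python
PCUTOFF = 0.6  # mirrors the `pcutoff` field in config.yaml

def filter_low_confidence(frames, pcutoff=PCUTOFF):
    """Replace low-likelihood (x, y) estimates with None so they can be
    interpolated or ignored downstream (hypothetical helper)."""
    cleaned = []
    for x, y, lik in frames:
        if lik >= pcutoff:
            cleaned.append((x, y, lik))
        else:
            cleaned.append((None, None, lik))  # flag unreliable detection
    return cleaned

# toy trajectory for one bodypart: (x, y, likelihood) per frame
snout = [(10.0, 12.0, 0.98), (11.0, 12.5, 0.31), (12.0, 13.0, 0.95)]
print(filter_low_confidence(snout))
# the middle frame is masked because its likelihood is below pcutoff
```

The same masking idea is what plot_trajectories and create_labeled_video apply internally when they respect pcutoff.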
Directory Structure
- dlc-models: holds the meta information with regard to the parameters of the feature detectors in cfg;
- test;
- train;
- labeled-data: store the frames used to create training dataset;
- training dataset: contain the training dataset and metadata about how the training dataset was created;
- videos:
-
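These folders are tied together by config.yaml at the project root. An illustrative fragment, with field names following the DLC config format; the values shown are examples, not defaults:

```yaml
Task: reaching              # project name
scorer: experimenter        # who labels the frames
project_path: /path/to/project
video_sets:
  /path/to/video1.avi:
    crop: 0, 640, 0, 480
bodyparts:
- snout
- leftear
- rightear
- tailbase
numframes2pick: 20          # frames extracted per video
TrainingFraction:
- 0.95
pcutoff: 0.6                # likelihood threshold used when plotting
```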
- multi-animal tracking: call create_new_project and set the flag multianimal=True;

Documentation resources:
- TUTORIALS: video tutorials that demonstrate various aspects of using the code base;
- HOW-TO-GUIDES: step-by-step user guidelines for using DeepLabCut on your own datasets (or on demo data);
- EXPLANATIONS: resources on understanding how DeepLabCut works;
- an online course is also available.
Scenario Usages
- I have single-animal videos, but I want to use the advanced tracking features & updated network capabilities introduced (for multi-animal projects) in DLC 2.2:
  - quick start: when you create_new_project, just set the flag multianimal=True.
  - Some tips: this is good for, say, a hand or a mouse, if you feel the "skeleton" during training would increase performance. DON'T do this for things that should be identified as individual objects; i.e., don't label whisker 1, whisker 2, whisker 3 as 3 individuals. Each whisker always has a specific spatial location, and by calling them individuals you will do WORSE than in single-animal mode.
- I have single-animal videos, but I want to use the new features in DLC 2.2:
  - quick start: when you create_new_project, just set the flag multianimal=False, and you still get lots of upgrades! This is the typical work path for many users.
- I have multiple identical-looking animals in my videos and I need to use DLC 2.2:
  - quick start: when you create_new_project, set the flag multianimal=True. If you can't tell the animals apart, you can assign the "individual" ID to any animal in each frame. See the labeling-with-2.2 demo video.
- I have multiple animals that I can tell apart in my videos and want to use DLC 2.2:
  - quick start: when you create_new_project, set the flag multianimal=True, and always label the "individual" ID name the same; i.e., if you have mouse1 and mouse2 but mouse2 always has a miniscope, label mouse2 consistently in every frame. See the labeling-with-2.2 demo video.
  - 🎥 VIDEO TUTORIAL AVAILABLE! Also: if you can tell the animals apart, label them consistently!
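In multi-animal projects (multianimal=True), config.yaml gains individual- and bodypart-level sections that reflect the labeling choices above. An illustrative fragment; the animal and bodypart names are examples:

```yaml
multianimalproject: true
individuals:
- mouse1
- mouse2                # e.g. the animal always carrying the miniscope
multianimalbodyparts:
- snout
- leftear
- rightear
uniquebodyparts: []     # parts that occur once per frame (e.g. a fixed landmark)
```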
- I have a pre-2.2 single-animal project, but I want to use 2.2:
  - supports 2 cameras for 3D estimation;
  - DeepLabCut 2.2 with extensions for multi-animal tracking;
  - please read the "convert to maDLC" guide.
Demo Usages
Python:
import deeplabcut
deeplabcut.launch_dlc()  # open the GUI
Command line:
python -m deeplabcut
Related Papers
- DeepLabCut: markerless pose estimation of user-defined body parts with deep learning
- Using DeepLabCut for 3D markerless pose estimation across species and behaviors
- DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model
- Deep learning tools for the measurement of animal behavior in neuroscience