DeepLabCut is a software package for markerless pose estimation of animals performing various tasks.

1. Installation

Windows: Anaconda environment.

```shell
# install Anaconda first, then create the environment
# (DLC-GPU.yaml is in the DeepLabCut project files)
conda env create -f DLC-GPU.yaml
```

Ubuntu: Docker container.

2. System Overview

Main steps to use DeepLabCut:

```python
# import the library
import deeplabcut

# create a new project
deeplabcut.create_new_project('project_name', 'experimenter', ['path of video1', 'video2'])

# extract frames
deeplabcut.extract_frames(config_path)

# label frames
deeplabcut.label_frames(config_path)

# check labels (optional)
deeplabcut.check_labels(config_path)

# create the training dataset
deeplabcut.create_training_dataset(config_path)

# train the network
deeplabcut.train_network(config_path)

# evaluate the network
deeplabcut.evaluate_network(config_path)
```
Paper: MEAL V2

- level:
- author: Zhiqiang Shen, Marios Savvides (Carnegie Mellon University)
- date: 2020-09-17
- keywords: knowledge distillation; discriminators

Summary: MEAL V2 simplifies MEAL in several ways: it adopts the similarity loss and discriminator only on the final outputs, and it uses the average of the softmax probabilities from all teacher ensembles as stronger supervision for distillation. It is the first method to boost a vanilla ResNet-50 to surpass 80% on ImageNet without architecture modification or additional training data; it only relies on
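The averaged-teacher supervision described above can be sketched in plain Python. This is a toy illustration of the idea (two hypothetical teachers, three classes), not MEAL V2's actual PyTorch implementation; the KL divergence stands in for the distillation loss between the averaged teacher distribution and the student's output.

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def average_teacher_probs(teacher_logits):
    """MEAL V2-style supervision: average the softmax outputs of all teachers."""
    probs = [softmax(l) for l in teacher_logits]
    n = len(probs)
    return [sum(p[i] for p in probs) / n for i in range(len(probs[0]))]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q): distillation loss between teacher average p and student output q."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Toy example: two teachers supervising one student over three classes.
teachers = [[2.0, 1.0, 0.1], [1.8, 1.2, 0.2]]
student = softmax([1.5, 1.0, 0.5])
target = average_teacher_probs(teachers)
loss = kl_divergence(target, student)
```

The averaged distribution is itself a valid probability vector, so it can be used directly as a soft target.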
Paper: DeepLabCut

- level: Nature Neuroscience
- author: Alexander Mathis, Pranav Mamidanna, Kevin M. Cury, Taiga Abe, Venkatesh N. Murthy, Mackenzie Weygandt Mathis, and Matthias Bethge
- date: 2018
- keywords: quantifying behavior; pose estimation
- citation: Mathis A, Mamidanna P, Cury K M, et al. DeepLabCut: markerless pose estimation of user-defined body parts with deep learning[J]. Nature Neuroscience, 2018, 21(9): 1281-1289. (cited by 401)

Summary: presents an efficient method
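DeepLabCut's detector head (building on DeeperCut) predicts a per-body-part confidence scoremap and reads the keypoint off its peak. A minimal sketch of that peak-extraction step in plain Python, using a toy hand-written scoremap (this is an illustration of the idea, not the library's API, which also applies location-refinement offsets):

```python
def keypoint_from_scoremap(scoremap):
    """Return (row, col, confidence) of the highest-scoring cell in a 2-D scoremap."""
    best = (0, 0, scoremap[0][0])
    for r, row in enumerate(scoremap):
        for c, v in enumerate(row):
            if v > best[2]:
                best = (r, c, v)
    return best

# Toy 3x4 scoremap for one body part; the peak marks the predicted keypoint.
scoremap = [
    [0.01, 0.02, 0.05, 0.01],
    [0.03, 0.10, 0.90, 0.08],
    [0.02, 0.04, 0.07, 0.02],
]
```

The peak confidence also serves as a per-keypoint likelihood, which DeepLabCut reports alongside the coordinates.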
Smart fridge, smart shelf, smart wardrobe (an idea: clothes carry RFID tags recording their information, the wardrobe has a reader that can read this information and, based on the weather, plus a smart mirror, etc., make related
1. Keypoint Extraction

1.1. Video Handling

Windows:

```shell
:: with face and hands
bin\OpenPoseDemo.exe --video examples\media\video.avi --face --hand
```

Linux:

```shell
# body only, saving keypoints to JSON without rendering
./build/examples/openpose/openpose.bin --video examples/media/video.avi --write_json output/ --display 0 --render_pose 0

# body + face + hands
./build/examples/openpose/openpose.bin --video examples/media/video.avi --write_json output/ --display 0 --render_pose 0 --face --hand

# save to JSON and to a rendered video
./build/examples/openpose/openpose.bin --video examples/media/video.avi --write_video output/result.avi --write_json output/
```

1.2. Webcam Handling

```shell
:: with face and hands
bin\OpenPoseDemo.exe --face --hand
```

1.3. Image Handling

:: with face
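The `--write_json` flag above produces one JSON file per frame, where each person's keypoints are stored as a flat `[x1, y1, c1, x2, y2, c2, ...]` list. A small sketch of grouping that list into `(x, y, confidence)` triples; the two-keypoint toy frame is illustrative (real BODY_25 output has 25 keypoints per person):

```python
import json

def parse_pose_keypoints(frame_json):
    """Group OpenPose's flat pose_keypoints_2d list into (x, y, c) triples per person."""
    people = []
    for person in frame_json.get("people", []):
        flat = person.get("pose_keypoints_2d", [])
        triples = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
        people.append(triples)
    return people

# Toy frame with one person and two keypoints.
frame = {"people": [{"pose_keypoints_2d": [100.0, 200.0, 0.9, 110.0, 210.0, 0.8]}]}
```

In practice you would load each per-frame file with `json.load` and feed the result to the parser.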
1. Installation

```shell
# clone the repository and change into the MediaPipe root directory
git clone https://github.com/google/mediapipe.git
cd mediapipe

# install Bazel
# link: https://blog.csdn.net/liudongdong19

# install OpenCV and FFmpeg
sudo apt-get install libopencv-core-dev libopencv-highgui-dev \
    libopencv-calib3d-dev libopencv-features2d-dev \
    libopencv-imgproc-dev libopencv-video-dev

# requires a GPU with EGL driver support;
# mesa GPU libraries work for desktop (or the Nvidia/AMD equivalent)
sudo apt-get install mesa-common-dev libegl1-mesa-dev libgles2-mesa-dev

# to compile with GPU support, replace --define MEDIAPIPE_DISABLE_GPU=1
# with --copt -DMESA_EGL_NO_X11_HEADERS --copt -DEGL_NO_X11
export GLOG_logtostderr=1

# if you are running on a Linux desktop with CPU only
bazel run --define MEDIAPIPE_DISABLE_GPU=1 \
    mediapipe/examples/desktop/hello_world:hello_world

# if you are running on a Linux desktop with GPU support enabled (via mesa drivers)
bazel run --copt -DMESA_EGL_NO_X11_HEADERS --copt -DEGL_NO_X11 \
    mediapipe/examples/desktop/hello_world:hello_world
```

2.