torchvision.transforms offers a composable transformation pipeline:
transforms.Compose([
    transforms.CenterCrop(10),
    transforms.ToTensor(),
])

1. PIL Image Ops
CenterCrop(size) # Crops the given PIL Image at the center.
ColorJitter(brightness=0, contrast=0, saturation=0, hue=0) # Randomly change the brightness, contrast and saturation of an image.
FiveCrop(size) # Crop the given PIL Image into four corners and the central crop.
Grayscale(num_output_channels=1) # Convert image to grayscale.
RandomAffine(degrees, translate=None, scale=None, shear=None, resample=False, fillcolor=0) # Random affine transformation of the image keeping the center invariant.
RandomCrop(size, padding=None, pad_if_needed=False, fill=0, padding_mode='constant') # Crop the given PIL Image at a random location.
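A minimal sketch of using such a pipeline end to end (the file name example.jpg is a placeholder; any PIL-readable image works):

from PIL import Image
import torchvision.transforms as transforms

pipeline = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),  # PIL-level ops first
    transforms.CenterCrop(10),
    transforms.ToTensor(),                        # last step: converts to a C x H x W float tensor
])
img = Image.open("example.jpg")   # placeholder input image
tensor = pipeline(img)            # the transforms are applied in order
print(tensor.shape)               # torch.Size([1, 10, 10])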
All datasets are subclasses of torch.utils.data.Dataset, i.e. they have __getitem__ and __len__ methods implemented. Hence they can all be passed to a torch.utils.data.DataLoader, which can load multiple samples in parallel using torch.multiprocessing workers.
DataLoader: loads data in parallel, assembles batches, and provides the shuffling policy.
Dataset: the dataset entry point; it provides __getitem__, and the transform function is executed there.
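A minimal sketch of that contract, with a made-up ToyDataset holding random tensors; the point is that the transform runs inside __getitem__, while the DataLoader handles batching, shuffling, and parallel workers:

import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):                        # hypothetical dataset for illustration
    def __init__(self, n=100, transform=None):
        self.data = torch.randn(n, 3, 32, 32)     # fake "images"
        self.labels = torch.randint(0, 10, (n,))
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        x, y = self.data[idx], self.labels[idx]
        if self.transform is not None:            # the transform is applied here, per sample
            x = self.transform(x)
        return x, y

loader = DataLoader(ToyDataset(), batch_size=16, shuffle=True, num_workers=2)
for images, labels in loader:                     # batches of 16, loaded by 2 worker processes
    pass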
torchvision.models contains models for different tasks: image classification, pixelwise semantic segmentation, object detection, instance segmentation, person keypoint detection, and video classification.
1. Classification Models
AlexNet, VGG, ResNet, SqueezeNet, DenseNet, Inception v3, GoogLeNet, ShuffleNet v2, MobileNet v2, ResNeXt, Wide ResNet, MNASNet
1.1. Random weights
import torchvision.models as models
resnet18 = models.resnet18()
alexnet = models.alexnet()
vgg16 = models.vgg16()
squeezenet = models.squeezenet1_0()
densenet = models.densenet161()
inception = models.inception_v3()
googlenet = models.googlenet()
shufflenet = models.shufflenet_v2_x1_0()
mobilenet = models.mobilenet_v2()
resnext50_32x4d = models.resnext50_32x4d()
wide_resnet50_2 = models.wide_resnet50_2()
mnasnet = models.mnasnet1_0()
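Beyond random weights, the same constructors can load pretrained weights. A short sketch (pretrained=True is the older torchvision API; newer releases take a weights= argument instead, and the input below is random data standing in for a preprocessed image batch):

import torch
import torchvision.models as models

resnet18 = models.resnet18(pretrained=True)   # downloads ImageNet weights on first use
resnet18.eval()                               # inference mode: fixes batch-norm stats, disables dropout

x = torch.randn(1, 3, 224, 224)               # dummy batch (N, C, H, W)
with torch.no_grad():
    logits = resnet18(x)                      # shape (1, 1000): ImageNet class scores
print(logits.argmax(dim=1))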
0. Relevant API
CountVectorizer() counts term frequencies (bag-of-words word counts).
>>> from sklearn.feature_extraction.text import CountVectorizer
>>> corpus = [
...     'This is the first document.',
...     'This document is the second document.',
...     'And this is the third one.',
...     'Is this the first document?',
... ]
>>> vectorizer = CountVectorizer()
>>> X = vectorizer.fit_transform(corpus)
>>> print(vectorizer.get_feature_names())
['and', 'document', 'first', 'is', 'one', 'second', 'the', 'third', 'this']
>>> print(X.toarray())
[[0 1 1 1 0 0 1 0 1]
 [0 2 0 1 0 1 1 0 1]
 [1 0 0 1 1 0 1 1 1]
 [0 1 1 1 0 0 1 0 1]]
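The fitted vectorizer can also encode unseen text against the learned vocabulary; words it has never seen (here the made-up sentence's 'another') are simply ignored:
>>> print(vectorizer.transform(['This is another document.']).toarray())
[[0 1 0 1 0 0 0 0 1]]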
**Arrays (which can also store primitive types)** are one kind of container for storing objects, but an array's length is fixed, so they are not suitable when the number of objects is unknown in advance. Collections (which can only store objects; primitive values must go in via their wrapper classes) have a variable size and cover that case.
1. Set
1.1. TreeSet
Implemented on top of a red-black tree; supports ordered operations such as finding elements within a range. Lookup is slower than HashSet: HashSet lookup is O(1), while TreeSet lookup is O(logN).