LiuDongdong

Because I love the craft, setting pen to paper has never been easy; only after a thousand revisions does a poem put the heart at ease.

ProductRelative

0. High frequency vs. low frequency: influencing factors. The greater the reader's transmit power, the longer the read/write distance. The higher the antenna gain, the narrower the beam width, so the read distance becomes longer, the coverage narrower, and the read zone easier to control. Tag …
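To make the power/gain relationship concrete, here is a minimal sketch of a free-space link-budget estimate based on the Friis transmission equation; the transmit power, antenna gains, tag sensitivity, and frequency used below are illustrative assumptions, not values from the post.

# Rough free-space read-range estimate for a passive UHF RFID tag.
# Sketch only: all numeric inputs below are assumed example values.
import math

def max_read_range_m(tx_power_w, reader_gain_dbi, tag_gain_dbi,
                     tag_sensitivity_dbm, freq_hz=915e6):
    """Theoretical maximum forward-link read distance in meters (Friis)."""
    wavelength = 3e8 / freq_hz
    eirp_w = tx_power_w * 10 ** (reader_gain_dbi / 10)       # reader EIRP
    tag_gain = 10 ** (tag_gain_dbi / 10)
    p_threshold_w = 10 ** (tag_sensitivity_dbm / 10) / 1000   # dBm -> W
    # Friis: P_tag = EIRP * G_tag * (wavelength / (4*pi*d))^2, solved for d
    return (wavelength / (4 * math.pi)) * math.sqrt(eirp_w * tag_gain / p_threshold_w)

# Doubling transmit power (or adding 3 dB of antenna gain) scales the
# theoretical range by sqrt(2), which is the trend described above.
print(max_read_range_m(1.0, 6, 2, -18))
print(max_read_range_m(2.0, 6, 2, -18))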

Classify_digit

To apply a classifier on this data, we need to flatten the images, turning each 2-D array of grayscale values from shape (8, 8) into shape (64,). Subsequently, the entire dataset will be of shape (n_samples, n_features), where n_samples is the number of images and n_features is the total number of pixels in each image. The labels are plain integer class indices; they are not one-hot encoded.
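A minimal sketch of this flattening step, following the standard scikit-learn digits example; the SVC hyperparameters and the 50/50 split are illustrative assumptions.

# Flatten (n_samples, 8, 8) images into (n_samples, 64) feature rows
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split

digits = datasets.load_digits()
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))   # each image -> 64 pixel features

# digits.target holds integer class labels 0..9 directly; no one-hot encoding
X_train, X_test, y_train, y_test = train_test_split(
    data, digits.target, test_size=0.5, shuffle=False)
clf = svm.SVC(gamma=0.001)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))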

SkLearn Evaluation

1. Cross-validation with cross_val_score

from sklearn import svm
from sklearn.model_selection import cross_val_score

# X, y: feature matrix and labels defined earlier in the post
clf = svm.SVC(kernel='linear', C=1, random_state=42)
scores = cross_val_score(clf, X, y, cv=5, scoring='f1_macro')
print("%0.2f f1_macro with a standard deviation of %0.2f" % (scores.mean(), scores.std()))

from sklearn import preprocessing
X_train, X_test, y_train, y_test = train_test_split(
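The excerpt breaks off at train_test_split; a minimal sketch of a typical continuation, assuming the iris data and a StandardScaler fit on the training split (as in the scikit-learn cross-validation guide).

from sklearn import datasets, preprocessing, svm
from sklearn.model_selection import train_test_split

X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.4, random_state=0)

# Fit the scaler on the training split only, then apply it to the held-out data
scaler = preprocessing.StandardScaler().fit(X_train)
clf = svm.SVC(C=1).fit(scaler.transform(X_train), y_train)
print(clf.score(scaler.transform(X_test), y_test))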

SkLearn Record

Supervised learning
1.1. Linear Models
1.2. Linear and Quadratic Discriminant Analysis
1.3. Kernel ridge regression
1.4. Support Vector Machines
1.5. Stochastic Gradient Descent
1.6. Nearest Neighbors
1.7. Gaussian Processes
1.8. Cross decomposition
1.9. Naive Bayes
1.10. Decision Trees
1.11. Ensemble methods
1.12. Multiclass and multioutput algorithms
1.13. Feature selection
1.14. Semi-supervised learning
1.15. Isotonic regression
1.16. Probability calibration
1.17. Neural network models (supervised)
Unsupervised learning
2.1. Gaussian mixture models
2.2.

SkLearnVisualization

1. Cross-validation (ROC)

print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.metrics import auc
from sklearn.metrics import plot_roc_curve
from sklearn.model_selection import StratifiedKFold

# #############################################################################
# Data IO and generation

# Import some data to play with
iris = datasets.load_iris()
X = iris.data
y = iris.target
X, y = X[y != 2], y[y != 2]   # keep classes 0 and 1 for a binary ROC problem
n_samples, n_features = X.shape

# Add noisy features
random_state = np.
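The excerpt stops before the cross-validation loop; here is a minimal sketch of how per-fold ROC curves can be computed and averaged with StratifiedKFold, using roc_curve/auc rather than plot_roc_curve so it runs on current scikit-learn. The fold count and SVC settings are assumptions.

import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets, svm
from sklearn.metrics import auc, roc_curve
from sklearn.model_selection import StratifiedKFold

iris = datasets.load_iris()
X, y = iris.data, iris.target
X, y = X[y != 2], y[y != 2]              # binary problem: classes 0 and 1

cv = StratifiedKFold(n_splits=5)
clf = svm.SVC(kernel='linear', random_state=42)

mean_fpr = np.linspace(0, 1, 100)
tprs, aucs = [], []
for train_idx, test_idx in cv.split(X, y):
    clf.fit(X[train_idx], y[train_idx])
    scores = clf.decision_function(X[test_idx])   # per-fold decision scores
    fpr, tpr, _ = roc_curve(y[test_idx], scores)
    aucs.append(auc(fpr, tpr))
    tprs.append(np.interp(mean_fpr, fpr, tpr))    # resample onto a common grid
    plt.plot(fpr, tpr, alpha=0.3)

mean_tpr = np.mean(tprs, axis=0)
plt.plot(mean_fpr, mean_tpr,
         label="mean ROC (AUC = %0.2f)" % auc(mean_fpr, mean_tpr))
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()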