
Machine Learning: Numeric Features

Handling Discrete Values

import pandas as pd
import numpy as np
vg_df = pd.read_csv('datasets/vgsales.csv', encoding = "ISO-8859-1")
vg_df[['Name', 'Platform', 'Year', 'Genre', 'Publisher']].iloc[1:7]
   Name                      Platform  Year    Genre         Publisher
1  Super Mario Bros.         NES       1985.0  Platform      Nintendo
2  Mario Kart Wii            Wii       2008.0  Racing        Nintendo
3  Wii Sports Resort         Wii       2009.0  Sports        Nintendo
4  Pokemon Red/Pokemon Blue  GB        1996.0  Role-Playing  Nintendo
5  Tetris                    GB        1989.0  Puzzle        Nintendo
6  New Super Mario Bros.     DS        2006.0  Platform      Nintendo
genres = np.unique(vg_df['Genre'])
genres
array(['Action', 'Adventure', 'Fighting', 'Misc', 'Platform', 'Puzzle',
       'Racing', 'Role-Playing', 'Shooter', 'Simulation', 'Sports',
       'Strategy'], dtype=object)

LabelEncoder

from sklearn.preprocessing import LabelEncoder

gle = LabelEncoder()
genre_labels = gle.fit_transform(vg_df['Genre'])
genre_mappings = {index: label for index, label in enumerate(gle.classes_)}
genre_mappings
{0: 'Action', 1: 'Adventure', 2: 'Fighting', 3: 'Misc', 4: 'Platform',
 5: 'Puzzle', 6: 'Racing', 7: 'Role-Playing', 8: 'Shooter',
 9: 'Simulation', 10: 'Sports', 11: 'Strategy'}
vg_df['GenreLabel'] = genre_labels
vg_df[['Name', 'Platform', 'Year', 'Genre', 'GenreLabel']].iloc[1:7]
   Name                      Platform  Year    Genre         GenreLabel
1  Super Mario Bros.         NES       1985.0  Platform      4
2  Mario Kart Wii            Wii       2008.0  Racing        6
3  Wii Sports Resort         Wii       2009.0  Sports        10
4  Pokemon Red/Pokemon Blue  GB        1996.0  Role-Playing  7
5  Tetris                    GB        1989.0  Puzzle        5
6  New Super Mario Bros.     DS        2006.0  Platform      4
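A note on decoding: a fitted LabelEncoder can also map the integer codes back to the original strings via inverse_transform. A minimal self-contained sketch on toy genre values (not the vgsales file):

```python
from sklearn.preprocessing import LabelEncoder

# Classes are assigned codes in sorted order, so 'Action' -> 0 here.
gle = LabelEncoder()
labels = gle.fit_transform(['Action', 'Puzzle', 'Action', 'Sports'])
print(labels)                            # integer codes
print(gle.inverse_transform(labels))     # back to the original strings
```

This round trip is handy when you need to report model outputs in the original category names.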

Map

poke_df = pd.read_csv('datasets/Pokemon.csv', encoding='utf-8')
poke_df = poke_df.sample(random_state=1, frac=1).reset_index(drop=True)
np.unique(poke_df['Generation'])
array(['Gen 1', 'Gen 2', 'Gen 3', 'Gen 4', 'Gen 5', 'Gen 6'], dtype=object)
gen_ord_map = {'Gen 1': 1, 'Gen 2': 2, 'Gen 3': 3, 'Gen 4': 4, 'Gen 5': 5, 'Gen 6': 6}
poke_df['GenerationLabel'] = poke_df['Generation'].map(gen_ord_map)
poke_df[['Name', 'Generation', 'GenerationLabel']].iloc[4:10]
   Name                 Generation  GenerationLabel
4  Octillery            Gen 2       2
5  Helioptile           Gen 6       6
6  Dialga               Gen 4       4
7  DeoxysDefense Forme  Gen 3       3
8  Rapidash             Gen 1       1
9  Swanna               Gen 5       5

One-hot Encoding

poke_df[['Name', 'Generation', 'Legendary']].iloc[4:10]
   Name                 Generation  Legendary
4  Octillery            Gen 2       False
5  Helioptile           Gen 6       False
6  Dialga               Gen 4       True
7  DeoxysDefense Forme  Gen 3       True
8  Rapidash             Gen 1       False
9  Swanna               Gen 5       False
from sklearn.preprocessing import OneHotEncoder, LabelEncoder

# transform and map pokemon generations
gen_le = LabelEncoder()
gen_labels = gen_le.fit_transform(poke_df['Generation'])
poke_df['Gen_Label'] = gen_labels

# transform and map pokemon legendary status
leg_le = LabelEncoder()
leg_labels = leg_le.fit_transform(poke_df['Legendary'])
poke_df['Lgnd_Label'] = leg_labels

poke_df_sub = poke_df[['Name', 'Generation', 'Gen_Label', 'Legendary', 'Lgnd_Label']]
poke_df_sub.iloc[4:10]
   Name                 Generation  Gen_Label  Legendary  Lgnd_Label
4  Octillery            Gen 2       1          False      0
5  Helioptile           Gen 6       5          False      0
6  Dialga               Gen 4       3          True       1
7  DeoxysDefense Forme  Gen 3       2          True       1
8  Rapidash             Gen 1       0          False      0
9  Swanna               Gen 5       4          False      0
# encode generation labels using one-hot encoding scheme
gen_ohe = OneHotEncoder()
gen_feature_arr = gen_ohe.fit_transform(poke_df[['Gen_Label']]).toarray()
gen_feature_labels = list(gen_le.classes_)
print (gen_feature_labels)
gen_features = pd.DataFrame(gen_feature_arr, columns=gen_feature_labels)

# encode legendary status labels using one-hot encoding scheme
leg_ohe = OneHotEncoder()
leg_feature_arr = leg_ohe.fit_transform(poke_df[['Lgnd_Label']]).toarray()
leg_feature_labels = ['Legendary_'+str(cls_label) for cls_label in leg_le.classes_]
print (leg_feature_labels)
leg_features = pd.DataFrame(leg_feature_arr, columns=leg_feature_labels)
['Gen 1', 'Gen 2', 'Gen 3', 'Gen 4', 'Gen 5', 'Gen 6']
['Legendary_False', 'Legendary_True']
poke_df_ohe = pd.concat([poke_df_sub, gen_features, leg_features], axis=1)
columns = sum([['Name', 'Generation', 'Gen_Label'],gen_feature_labels,['Legendary', 'Lgnd_Label'],leg_feature_labels], [])
poke_df_ohe[columns].iloc[4:10]
   Name                 Generation  Gen_Label  Gen 1  Gen 2  Gen 3  Gen 4  Gen 5  Gen 6  Legendary  Lgnd_Label  Legendary_False  Legendary_True
4  Octillery            Gen 2       1          0.0    1.0    0.0    0.0    0.0    0.0    False      0           1.0              0.0
5  Helioptile           Gen 6       5          0.0    0.0    0.0    0.0    0.0    1.0    False      0           1.0              0.0
6  Dialga               Gen 4       3          0.0    0.0    0.0    1.0    0.0    0.0    True       1           0.0              1.0
7  DeoxysDefense Forme  Gen 3       2          0.0    0.0    1.0    0.0    0.0    0.0    True       1           0.0              1.0
8  Rapidash             Gen 1       0          1.0    0.0    0.0    0.0    0.0    0.0    False      0           1.0              0.0
9  Swanna               Gen 5       4          0.0    0.0    0.0    0.0    1.0    0.0    False      0           1.0              0.0
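The LabelEncoder-then-OneHotEncoder round trip above reflects older scikit-learn versions; since scikit-learn 0.20, OneHotEncoder accepts string columns directly, so the intermediate integer labels are unnecessary. A minimal sketch on toy generation values:

```python
from sklearn.preprocessing import OneHotEncoder
import pandas as pd

df = pd.DataFrame({'Generation': ['Gen 1', 'Gen 3', 'Gen 1']})

# Fit directly on the string column; no LabelEncoder step needed.
ohe = OneHotEncoder()
arr = ohe.fit_transform(df[['Generation']]).toarray()
print(ohe.categories_)   # the discovered category order
print(arr)
```

The encoder also remembers the categories, so transform on new data produces columns in a consistent order.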

get_dummies

gen_dummy_features = pd.get_dummies(poke_df['Generation'], drop_first=True)
pd.concat([poke_df[['Name', 'Generation']], gen_dummy_features], axis=1).iloc[4:10]
   Name                 Generation  Gen 2  Gen 3  Gen 4  Gen 5  Gen 6
4  Octillery            Gen 2       1      0      0      0      0
5  Helioptile           Gen 6       0      0      0      0      1
6  Dialga               Gen 4       0      0      1      0      0
7  DeoxysDefense Forme  Gen 3       0      1      0      0      0
8  Rapidash             Gen 1       0      0      0      0      0
9  Swanna               Gen 5       0      0      0      1      0
gen_onehot_features = pd.get_dummies(poke_df['Generation'])
pd.concat([poke_df[['Name', 'Generation']], gen_onehot_features], axis=1).iloc[4:10]
   Name                 Generation  Gen 1  Gen 2  Gen 3  Gen 4  Gen 5  Gen 6
4  Octillery            Gen 2       0      1      0      0      0      0
5  Helioptile           Gen 6       0      0      0      0      0      1
6  Dialga               Gen 4       0      0      0      1      0      0
7  DeoxysDefense Forme  Gen 3       0      0      1      0      0      0
8  Rapidash             Gen 1       1      0      0      0      0      0
9  Swanna               Gen 5       0      0      0      0      1      0
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
import scipy.stats as spstats
%matplotlib inline
mpl.style.reload_library()
mpl.style.use('classic')
mpl.rcParams['figure.facecolor'] = (1, 1, 1, 0)
mpl.rcParams['figure.figsize'] = [6.0, 4.0]
mpl.rcParams['figure.dpi'] = 100
poke_df = pd.read_csv('datasets/Pokemon.csv', encoding='utf-8')
poke_df.head()
   #  Name                   Type 1  Type 2  Total  HP  Attack  Defense  Sp. Atk  Sp. Def  Speed  Generation  Legendary
0  1  Bulbasaur              Grass   Poison  318    45  49      49       65       65       45     Gen 1       False
1  2  Ivysaur                Grass   Poison  405    60  62      63       80       80       60     Gen 1       False
2  3  Venusaur               Grass   Poison  525    80  82      83       100      100      80     Gen 1       False
3  3  VenusaurMega Venusaur  Grass   Poison  625    80  100     123      122      120      80     Gen 1       False
4  4  Charmander             Fire    NaN     309    39  52      43       60       50       65     Gen 1       False
poke_df[['HP', 'Attack', 'Defense']].head()
   HP  Attack  Defense
0  45  49      49
1  60  62      63
2  80  82      83
3  80  100     123
4  39  52      43
poke_df[['HP', 'Attack', 'Defense']].describe()
       HP          Attack      Defense
count  800.000000  800.000000  800.000000
mean   69.258750   79.001250   73.842500
std    25.534669   32.457366   31.183501
min    1.000000    5.000000    5.000000
25%    50.000000   55.000000   50.000000
50%    65.000000   75.000000   70.000000
75%    80.000000   100.000000  90.000000
max    255.000000  190.000000  230.000000
popsong_df = pd.read_csv('datasets/song_views.csv', encoding='utf-8')
popsong_df.head(10)
   user_id                                   song_id             title           listen_count
0  b6b799f34a204bd928ea014c243ddad6d0be4f8f  SOBONKR12A58A7A7E0  You're The One  2
1  b41ead730ac14f6b6717b9cf8859d5579f3f8d4d  SOBONKR12A58A7A7E0  You're The One  0
2  4c84359a164b161496d05282707cecbd50adbfc4  SOBONKR12A58A7A7E0  You're The One  0
3  779b5908593756abb6ff7586177c966022668b06  SOBONKR12A58A7A7E0  You're The One  0
4  dd88ea94f605a63d9fc37a214127e3f00e85e42d  SOBONKR12A58A7A7E0  You're The One  0
5  68f0359a2f1cedb0d15c98d88017281db79f9bc6  SOBONKR12A58A7A7E0  You're The One  0
6  116a4c95d63623a967edf2f3456c90ebbf964e6f  SOBONKR12A58A7A7E0  You're The One  17
7  45544491ccfcdc0b0803c34f201a6287ed4e30f8  SOBONKR12A58A7A7E0  You're The One  0
8  e701a24d9b6c59f5ac37ab28462ca82470e27cfb  SOBONKR12A58A7A7E0  You're The One  68
9  edc8b7b1fd592a3b69c3d823a742e1a064abec95  SOBONKR12A58A7A7E0  You're The One  0

Binary Features

watched = np.array(popsong_df['listen_count']) 
watched[watched >= 1] = 1
popsong_df['watched'] = watched
popsong_df.head(10)
   user_id                                   song_id             title           listen_count  watched
0  b6b799f34a204bd928ea014c243ddad6d0be4f8f  SOBONKR12A58A7A7E0  You're The One  2             1
1  b41ead730ac14f6b6717b9cf8859d5579f3f8d4d  SOBONKR12A58A7A7E0  You're The One  0             0
2  4c84359a164b161496d05282707cecbd50adbfc4  SOBONKR12A58A7A7E0  You're The One  0             0
3  779b5908593756abb6ff7586177c966022668b06  SOBONKR12A58A7A7E0  You're The One  0             0
4  dd88ea94f605a63d9fc37a214127e3f00e85e42d  SOBONKR12A58A7A7E0  You're The One  0             0
5  68f0359a2f1cedb0d15c98d88017281db79f9bc6  SOBONKR12A58A7A7E0  You're The One  0             0
6  116a4c95d63623a967edf2f3456c90ebbf964e6f  SOBONKR12A58A7A7E0  You're The One  17            1
7  45544491ccfcdc0b0803c34f201a6287ed4e30f8  SOBONKR12A58A7A7E0  You're The One  0             0
8  e701a24d9b6c59f5ac37ab28462ca82470e27cfb  SOBONKR12A58A7A7E0  You're The One  68            1
9  edc8b7b1fd592a3b69c3d823a742e1a064abec95  SOBONKR12A58A7A7E0  You're The One  0             0
from sklearn.preprocessing import Binarizer

bn = Binarizer(threshold=0.9)
pd_watched = bn.transform([popsong_df['listen_count']])[0]
popsong_df['pd_watched'] = pd_watched
popsong_df.head(11)
    user_id                                   song_id             title           listen_count  watched  pd_watched
0   b6b799f34a204bd928ea014c243ddad6d0be4f8f  SOBONKR12A58A7A7E0  You're The One  2             1        1
1   b41ead730ac14f6b6717b9cf8859d5579f3f8d4d  SOBONKR12A58A7A7E0  You're The One  0             0        0
2   4c84359a164b161496d05282707cecbd50adbfc4  SOBONKR12A58A7A7E0  You're The One  0             0        0
3   779b5908593756abb6ff7586177c966022668b06  SOBONKR12A58A7A7E0  You're The One  0             0        0
4   dd88ea94f605a63d9fc37a214127e3f00e85e42d  SOBONKR12A58A7A7E0  You're The One  0             0        0
5   68f0359a2f1cedb0d15c98d88017281db79f9bc6  SOBONKR12A58A7A7E0  You're The One  0             0        0
6   116a4c95d63623a967edf2f3456c90ebbf964e6f  SOBONKR12A58A7A7E0  You're The One  17            1        1
7   45544491ccfcdc0b0803c34f201a6287ed4e30f8  SOBONKR12A58A7A7E0  You're The One  0             0        0
8   e701a24d9b6c59f5ac37ab28462ca82470e27cfb  SOBONKR12A58A7A7E0  You're The One  68            1        1
9   edc8b7b1fd592a3b69c3d823a742e1a064abec95  SOBONKR12A58A7A7E0  You're The One  0             0        0
10  fb41d1c374d093ab643ef3bcd70eeb258d479076  SOBONKR12A58A7A7E0  You're The One  1             1        1
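Binarizer operates on any 2-D numeric array; a minimal self-contained sketch with toy listen counts, reshaped to a single column as the estimator expects:

```python
from sklearn.preprocessing import Binarizer
import numpy as np

# Toy listen counts (not the song_views file), as one column.
counts = np.array([2, 0, 17, 0, 68, 1]).reshape(-1, 1)

# Values strictly above the threshold become 1, the rest 0.
bn = Binarizer(threshold=0.9)
out = bn.fit_transform(counts).ravel()
print(out)
```

A threshold of 0.9 on integer counts is equivalent to "listened at least once".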

Polynomial Features

atk_def = poke_df[['Attack', 'Defense']]
atk_def.head()
   Attack  Defense
0  49      49
1  62      63
2  82      83
3  100     123
4  52      43
from sklearn.preprocessing import PolynomialFeatures

pf = PolynomialFeatures(degree=2, interaction_only=False, include_bias=False)
res = pf.fit_transform(atk_def)
res
array([[    49.,     49.,   2401.,   2401.,   2401.],
       [    62.,     63.,   3844.,   3906.,   3969.],
       [    82.,     83.,   6724.,   6806.,   6889.],
       ...,
       [   110.,     60.,  12100.,   6600.,   3600.],
       [   160.,     60.,  25600.,   9600.,   3600.],
       [   110.,    120.,  12100.,  13200.,  14400.]])
intr_features = pd.DataFrame(res, columns=['Attack', 'Defense', 'Attack^2', 'Attack x Defense', 'Defense^2'])
intr_features.head(5)
   Attack  Defense  Attack^2  Attack x Defense  Defense^2
0  49.0    49.0     2401.0    2401.0            2401.0
1  62.0    63.0     3844.0    3906.0            3969.0
2  82.0    83.0     6724.0    6806.0            6889.0
3  100.0   123.0    10000.0   12300.0           15129.0
4  52.0    43.0     2704.0    2236.0            1849.0

Binning Features

fcc_survey_df = pd.read_csv('datasets/fcc_2016_coder_survey_subset.csv', encoding='utf-8')
fcc_survey_df[['ID.x', 'EmploymentField', 'Age', 'Income']].head()
   ID.x                              EmploymentField                        Age   Income
0  cef35615d61b202f1dc794ef2746df14  office and administrative support      28.0  32000.0
1  323e5a113644d18185c743c241407754  food and beverage                      22.0  15000.0
2  b29a1027e5cd062e654a63764157461d  finance                                19.0  48000.0
3  04a11e4bcb573a1261eb0d9948d32637  arts, entertainment, sports, or media  26.0  43000.0
4  9368291c93d5d5f5c8cdb1a575e18bec  education                              20.0  6000.0
fig, ax = plt.subplots()
fcc_survey_df['Age'].hist(color='#A9C5D3')
ax.set_title('Developer Age Histogram', fontsize=12)
ax.set_xlabel('Age', fontsize=12)
ax.set_ylabel('Frequency', fontsize=12)
Text(0,0.5,'Frequency')

[Figure: Developer Age Histogram]

Binning based on rounding

Age Range : Bin
---------------
 0 -  9   : 0
10 - 19   : 1
20 - 29   : 2
30 - 39   : 3
40 - 49   : 4
50 - 59   : 5
60 - 69   : 6
... and so on
fcc_survey_df['Age_bin_round'] = np.array(np.floor(np.array(fcc_survey_df['Age']) / 10.))
fcc_survey_df[['ID.x', 'Age', 'Age_bin_round']].iloc[1071:1076]
      ID.x                              Age   Age_bin_round
1071  6a02aa4618c99fdb3e24de522a099431  17.0  1.0
1072  f0e5e47278c5f248fe861c5f7214c07a  38.0  3.0
1073  6e14f6d0779b7e424fa3fdd9e4bd3bf9  21.0  2.0
1074  c2654c07dc929cdf3dad4d1aec4ffbb3  53.0  5.0
1075  f07449fc9339b2e57703ec7886232523  35.0  3.0
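The same decade bins can be produced with pd.cut, which makes the edges explicit and lets you attach labels directly. A sketch on toy ages (not the survey file):

```python
import pandas as pd

# Toy ages matching the rows shown above.
ages = pd.Series([17.0, 38.0, 21.0, 53.0, 35.0])

# right=False gives half-open decade intervals [0,10), [10,20), ...
bins = pd.cut(ages, bins=[0, 10, 20, 30, 40, 50, 60], right=False,
              labels=[0, 1, 2, 3, 4, 5])
print(bins.tolist())
```

This is equivalent to the floor-division trick, but the bin boundaries are visible in the code.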

Quantile Binning

fcc_survey_df[['ID.x', 'Age', 'Income']].iloc[4:9]
   ID.x                              Age   Income
4  9368291c93d5d5f5c8cdb1a575e18bec  20.0  6000.0
5  dd0e77eab9270e4b67c19b0d6bbf621b  34.0  40000.0
6  7599c0aa0419b59fd11ffede98a3665d  23.0  32000.0
7  6dff182db452487f07a47596f314bddc  35.0  40000.0
8  9dc233f8ed1c6eb2432672ab4bb39249  33.0  80000.0
fig, ax = plt.subplots()
fcc_survey_df['Income'].hist(bins=30, color='#A9C5D3')
ax.set_title('Developer Income Histogram', fontsize=12)
ax.set_xlabel('Developer Income', fontsize=12)
ax.set_ylabel('Frequency', fontsize=12)
Text(0,0.5,'Frequency')

[Figure: Developer Income Histogram]

quantile_list = [0, .25, .5, .75, 1.]
quantiles = fcc_survey_df['Income'].quantile(quantile_list)
quantiles
0.00      6000.0
0.25     20000.0
0.50     37000.0
0.75     60000.0
1.00    200000.0
Name: Income, dtype: float64
fig, ax = plt.subplots()
fcc_survey_df['Income'].hist(bins=30, color='#A9C5D3')

for quantile in quantiles:
    qvl = plt.axvline(quantile, color='r')
ax.legend([qvl], ['Quantiles'], fontsize=10)
ax.set_title('Developer Income Histogram with Quantiles', fontsize=12)
ax.set_xlabel('Developer Income', fontsize=12)
ax.set_ylabel('Frequency', fontsize=12)
Text(0,0.5,'Frequency')

[Figure: Developer Income Histogram with Quantiles]

quantile_labels = ['0-25Q', '25-50Q', '50-75Q', '75-100Q']
fcc_survey_df['Income_quantile_range'] = pd.qcut(fcc_survey_df['Income'], q=quantile_list)
fcc_survey_df['Income_quantile_label'] = pd.qcut(fcc_survey_df['Income'], q=quantile_list, labels=quantile_labels)
fcc_survey_df[['ID.x', 'Age', 'Income', 'Income_quantile_range', 'Income_quantile_label']].iloc[4:9]
   ID.x                              Age   Income   Income_quantile_range  Income_quantile_label
4  9368291c93d5d5f5c8cdb1a575e18bec  20.0  6000.0   (5999.999, 20000.0]    0-25Q
5  dd0e77eab9270e4b67c19b0d6bbf621b  34.0  40000.0  (37000.0, 60000.0]     50-75Q
6  7599c0aa0419b59fd11ffede98a3665d  23.0  32000.0  (20000.0, 37000.0]     25-50Q
7  6dff182db452487f07a47596f314bddc  35.0  40000.0  (37000.0, 60000.0]     50-75Q
8  9dc233f8ed1c6eb2432672ab4bb39249  33.0  80000.0  (60000.0, 200000.0]    75-100Q
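pd.qcut can also return the bin edges it computed via retbins=True, which is useful for applying the same cut points to new data later. A sketch on toy incomes (values chosen to match the quantiles shown above):

```python
import pandas as pd

incomes = pd.Series([6000., 20000., 37000., 60000., 200000.])
labels = ['0-25Q', '25-50Q', '50-75Q', '75-100Q']

# retbins=True returns both the binned series and the quantile edges.
binned, edges = pd.qcut(incomes, q=[0, .25, .5, .75, 1.],
                        labels=labels, retbins=True)
print(edges)
print(binned.tolist())
```

The returned edges can be fed to pd.cut to bin a held-out set consistently.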

Log Transform and Box-Cox

fcc_survey_df['Income_log'] = np.log(1 + fcc_survey_df['Income'])
fcc_survey_df[['ID.x', 'Age', 'Income', 'Income_log']].iloc[4:9]
   ID.x                              Age   Income   Income_log
4  9368291c93d5d5f5c8cdb1a575e18bec  20.0  6000.0   8.699681
5  dd0e77eab9270e4b67c19b0d6bbf621b  34.0  40000.0  10.596660
6  7599c0aa0419b59fd11ffede98a3665d  23.0  32000.0  10.373522
7  6dff182db452487f07a47596f314bddc  35.0  40000.0  10.596660
8  9dc233f8ed1c6eb2432672ab4bb39249  33.0  80000.0  11.289794
income_log_mean = np.round(np.mean(fcc_survey_df['Income_log']), 2)

fig, ax = plt.subplots()
fcc_survey_df['Income_log'].hist(bins=30, color='#A9C5D3')
plt.axvline(income_log_mean, color='r')
ax.set_title('Developer Income Histogram after Log Transform', fontsize=12)
ax.set_xlabel('Developer Income (log scale)', fontsize=12)
ax.set_ylabel('Frequency', fontsize=12)
ax.text(11.5, 450, r'$\mu$='+str(income_log_mean), fontsize=10)
Text(11.5,450,'$\\mu$=10.43')
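The heading mentions Box-Cox, but only the log transform is shown above. A minimal sketch of scipy.stats.boxcox on toy incomes; note that Box-Cox requires strictly positive input, estimates the power parameter lambda when not given, and reduces to the plain log transform at lambda = 0:

```python
import numpy as np
import scipy.stats as spstats

# Toy incomes (strictly positive, as boxcox requires).
income = np.array([6000., 40000., 32000., 40000., 80000.])

# With no lmbda given, boxcox estimates the lambda that maximizes
# the log-likelihood and returns (transformed data, lambda).
transformed, opt_lambda = spstats.boxcox(income)
print(opt_lambda)

# lmbda=0 is exactly the natural log transform.
log_equiv = spstats.boxcox(income, lmbda=0)
matches_log = bool(np.allclose(log_equiv, np.log(income)))
print(matches_log)
```

In practice you fit lambda on the training data and reuse it when transforming new samples.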

Date Features

import datetime
import numpy as np
import pandas as pd
from dateutil.parser import parse
import pytz
time_stamps = ['2015-03-08 10:30:00.360000+00:00', '2017-07-13 15:45:05.755000-07:00','2012-01-20 22:30:00.254000+05:30', '2016-12-25 00:30:00.000000+10:00']
df = pd.DataFrame(time_stamps, columns=['Time'])
df


   Time
0  2015-03-08 10:30:00.360000+00:00
1  2017-07-13 15:45:05.755000-07:00
2  2012-01-20 22:30:00.254000+05:30
3  2016-12-25 00:30:00.000000+10:00
ts_objs = np.array([pd.Timestamp(item) for item in np.array(df.Time)])
df['TS_obj'] = ts_objs
ts_objs
array([Timestamp('2015-03-08 10:30:00.360000+0000', tz='UTC'),
       Timestamp('2017-07-13 15:45:05.755000-0700', tz='pytz.FixedOffset(-420)'),
       Timestamp('2012-01-20 22:30:00.254000+0530', tz='pytz.FixedOffset(330)'),
       Timestamp('2016-12-25 00:30:00+1000', tz='pytz.FixedOffset(600)')], dtype=object)
df['Year'] = df['TS_obj'].apply(lambda d: d.year)
df['Month'] = df['TS_obj'].apply(lambda d: d.month)
df['Day'] = df['TS_obj'].apply(lambda d: d.day)
df['DayOfWeek'] = df['TS_obj'].apply(lambda d: d.dayofweek)
df['DayName'] = df['TS_obj'].apply(lambda d: d.day_name())  # weekday_name was removed in pandas 1.0
df['DayOfYear'] = df['TS_obj'].apply(lambda d: d.dayofyear)
df['WeekOfYear'] = df['TS_obj'].apply(lambda d: d.weekofyear)
df['Quarter'] = df['TS_obj'].apply(lambda d: d.quarter)

df[['Time', 'Year', 'Month', 'Day', 'Quarter', 'DayOfWeek', 'DayName', 'DayOfYear', 'WeekOfYear']]
   Time                              Year  Month  Day  Quarter  DayOfWeek  DayName   DayOfYear  WeekOfYear
0  2015-03-08 10:30:00.360000+00:00  2015  3      8    1        6          Sunday    67         10
1  2017-07-13 15:45:05.755000-07:00  2017  7      13   3        3          Thursday  194        28
2  2012-01-20 22:30:00.254000+05:30  2012  1      20   1        4          Friday    20         3
3  2016-12-25 00:30:00.000000+10:00  2016  12     25   4        6          Sunday    360        51

Time Features

df['Hour'] = df['TS_obj'].apply(lambda d: d.hour)
df['Minute'] = df['TS_obj'].apply(lambda d: d.minute)
df['Second'] = df['TS_obj'].apply(lambda d: d.second)
df['MUsecond'] = df['TS_obj'].apply(lambda d: d.microsecond)   # microsecond
df['UTC_offset'] = df['TS_obj'].apply(lambda d: d.utcoffset()) # UTC offset

df[['Time', 'Hour', 'Minute', 'Second', 'MUsecond', 'UTC_offset']]
   Time                              Hour  Minute  Second  MUsecond  UTC_offset
0  2015-03-08 10:30:00.360000+00:00  10    30      0       360000    00:00:00
1  2017-07-13 15:45:05.755000-07:00  15    45      5       755000    -1 days +17:00:00
2  2012-01-20 22:30:00.254000+05:30  22    30      0       254000    05:30:00
3  2016-12-25 00:30:00.000000+10:00  0     30      0       0         10:00:00

Binning by Time of Day

hour_bins = [-1, 5, 11, 16, 21, 23]
bin_names = ['Late Night', 'Morning', 'Afternoon', 'Evening', 'Night']
df['TimeOfDayBin'] = pd.cut(df['Hour'], bins=hour_bins, labels=bin_names)
df[['Time', 'Hour', 'TimeOfDayBin']]
   Time                              Hour  TimeOfDayBin
0  2015-03-08 10:30:00.360000+00:00  10    Morning
1  2017-07-13 15:45:05.755000-07:00  15    Afternoon
2  2012-01-20 22:30:00.254000+05:30  22    Night
3  2016-12-25 00:30:00.000000+10:00  0     Late Night
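The apply-based extraction above works, but with a proper datetime column pandas' .dt accessor computes the same fields vectorized, without per-row Python lambdas. A sketch on toy timestamps in a single time zone:

```python
import pandas as pd

# Toy timestamps (no mixed offsets, so to_datetime yields a datetime column).
df = pd.DataFrame({'Time': pd.to_datetime(['2015-03-08 10:30:00',
                                           '2016-12-25 00:30:00'])})

# Vectorized field extraction via the .dt accessor.
df['Hour'] = df['Time'].dt.hour
df['DayName'] = df['Time'].dt.day_name()
print(df)
```

On large frames this is considerably faster than apply, and the same accessor exposes month, quarter, dayofyear, and the other fields used above.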
