
Dataset for Stable Diffusion

1.Dataset for Stable Diffusion

Notes based on:
1. Flickr8k dataset processing
2. Processing the Flickr8k dataset
3. GitHub: pytorch-stable-diffusion
4. Flickr 8k Dataset
5. dataset_flickr8k.json

1.1 Dataset

We use the Flickr8k dataset, which comes in two parts: Flicker8k_Dataset (a folder containing all the images) and Flickr8k.token.txt (two columns, image_id and caption), where each image_id corresponds to 5 captions (sentences).
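As a quick sanity check, the token file can be grouped by image_id. A minimal sketch, assuming the standard `image.jpg#k<TAB>caption` line format of Flickr8k.token.txt (the file path is illustrative):

from collections import defaultdict

# Group the 5 captions of each image by image_id.
# Assumes each line looks like: "1000268201_693b08cb0e.jpg#0\tA child in a pink dress ..."
captions_by_image = defaultdict(list)
with open("Flickr8k.token.txt", "r") as f:
    for line in f:
        image_tag, caption = line.rstrip("\n").split("\t")
        image_id = image_tag.split("#")[0]  # strip the "#0".."#4" caption index
        captions_by_image[image_id].append(caption)

print(len(captions_by_image))  # number of distinct images; each should map to 5 captions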

1.2 Dataset description file

Dataset description file: dataset_flickr8k.json
The file has the following format:

{"images": [
    {"sentids": [...],
     "imgid": 0,
     "sentences": [{"tokens": [...]}, {"tokens": [...], "raw": "...", "imgid": 0, "sentid": 0}, ...],
     "split": "train",
     "filename": "....jpg"},
    {"sentids": ...},
    ...
  ],
 "dataset": "flickr8k"}

Field descriptions

| field | example | meaning |
|-------|---------|---------|
| "sentids" | [0, 1, 2, 3, 4] | ids of the captions for this image (each image has 5 captions, so sentids runs from 0 to 4) |
| "imgid" | 0 | id of the image (0 to 7999, 8000 images in total) |
| "sentences" | [ ] | the 5 captions of one image |
| "tokens" | [ ] | each caption split into individual words |
| "raw" | " " | the caption formed by joining the tokens |
| "imgid" | 0 | id of the image this caption belongs to |
| "sentid" | 0 | id of this specific caption of the image |
| "split" | " " | whether this image and its captions belong to the training, validation, or test set |
| "filename" | "...jpg" | the image file name |
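For concreteness, a single image entry would look roughly like this; the filename and tokens are illustrative, not copied from the actual file:

{
  "sentids": [0, 1, 2, 3, 4],
  "imgid": 0,
  "sentences": [
    {"tokens": ["a", "child", "in", "a", "pink", "dress"],
     "raw": "A child in a pink dress .",
     "imgid": 0,
     "sentid": 0},
    ...
  ],
  "split": "train",
  "filename": "1000268201_693b08cb0e.jpg"
}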

dataset_flickr8k.json

1.3 Process Datasets

The code below is taken from: Flickr8k dataset processing (for learning purposes only)

import json
import os
import random
from collections import Counter, defaultdict
from matplotlib import pyplot as plt
from PIL import Image
from argparse import Namespace
import numpy as np
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence
from torch.utils.data import Dataset
import torchvision
import torchvision.transforms as transforms


def create_dataset(dataset='flickr8k', captions_per_image=5, min_word_count=5, max_len=30):
    """
    Parameters:
        dataset: Name of the dataset
        captions_per_image: Number of captions per image
        min_word_count: Only consider words that appear at least this many times in the dataset (excluding the test set)
        max_len: Maximum number of words in a caption. Captions longer than this will be truncated.
    Output:
        A vocabulary file: vocab.json
        Three dataset files: train_data.json, val_data.json, test_data.json
    """
    # Paths for reading data and saving processed data
    # Path to the dataset JSON file
    flickr_json_path = ".../sd/data/dataset_flickr8k.json"
    # Folder containing images
    image_folder = ".../sd/data/Flicker8k_Dataset"
    # Folder to save processed results
    # The % operator formats the string by replacing %s with the value of `dataset`;
    # e.g. for dataset="flickr8k" the resulting output_folder is .../sd/data/flickr8k
    output_folder = ".../sd/data/%s" % dataset

    # Ensure the output directory exists
    os.makedirs(output_folder, exist_ok=True)
    print(f"Output folder: {output_folder}")

    # Read the dataset JSON file
    with open(file=flickr_json_path, mode="r") as j:
        data = json.load(fp=j)

    # Initialize containers for image paths, captions, and vocabulary
    image_paths = defaultdict(list)     # Dictionary to store image paths
    image_captions = defaultdict(list)  # Dictionary to store image captions
    # Counts elements and returns a dictionary; key: element, value: number of occurrences
    vocab = Counter()

    # Read from dataset_flickr8k.json
    for img in data["images"]:  # Iterate over each image in the dataset
        split = img["split"]  # Determine the split (train, val, or test) for the image
        captions = []
        for c in img["sentences"]:  # Iterate over each caption for the image
            # Update word frequency counts, excluding test-set data
            if split != "test":  # Only update the vocabulary for train/val splits
                # c['tokens'] is a list; the count of each word in the list is increased by one
                vocab.update(c['tokens'])  # Update the vocabulary with the words in the caption
            # Only consider captions that are within the maximum length
            if len(c["tokens"]) <= max_len:
                captions.append(c["tokens"])  # Add the caption if it meets the length requirement
        if len(captions) == 0:  # Skip images with no valid captions
            continue
        # Construct the full image path: image_folder + img['filename']
        path = os.path.join(image_folder, img['filename'])
        # Save the full image path and its captions in the respective dictionaries
        image_paths[split].append(path)
        image_captions[split].append(captions)
    '''
    After the above steps, we have:
    - vocab (a dict): keys are words, values are the counts of those words
    - image_paths (a dict): keys "train", "val", and "test"; values are lists of absolute image paths
    - image_captions (a dict): keys "train", "val", and "test"; values are lists of captions
    '''
    ........

Via the dataset_flickr8k.json file, we convert the dataset into three dictionaries:

| dict | key | value |
|------|-----|-------|
| vocab | word | frequency of the word across all captions |
| image_paths | "train", "val", "test" | lists of absolute image paths |
| image_captions | "train", "val", "test" | lists of captions |

We can print their contents while debugging:

print(vocab)
print(image_paths["train"][1])
print(image_captions["train"][1])

def create_dataset(dataset='flickr8k', captions_per_image=5, min_word_count=5, max_len=30):
    """
    Parameters:
        dataset: Name of the dataset
        captions_per_image: Number of captions per image
        min_word_count: Only consider words that appear at least this many times in the dataset (excluding the test set)
        max_len: Maximum number of words in a caption. Captions longer than this will be truncated.
    Output:
        A vocabulary file: vocab.json
        Three dataset files: train_data.json, val_data.json, test_data.json
    """
    ........
    # Create the vocabulary, adding placeholders for special tokens:
    # padding <pad>, unknown words <unk>, and sentence start/end markers <start> <end>
    words = [w for w in vocab.keys() if vocab[w] > min_word_count]  # Filter words by minimum count
    vocab = {k: v + 1 for v, k in enumerate(words)}  # Create the vocabulary with indices
    # Add special tokens to the vocabulary
    vocab['<pad>'] = 0
    vocab['<unk>'] = len(vocab)
    vocab['<start>'] = len(vocab)
    vocab['<end>'] = len(vocab)
    # Save the vocabulary to a file
    with open(os.path.join(output_folder, 'vocab.json'), "w") as fw:
        json.dump(vocab, fw)

    # Process each dataset split: split = "train", split = "val", and split = "test"
    for split in image_paths:
        imgpaths = image_paths[split]   # List of image paths for the split (a list)
        imcaps = image_captions[split]  # List of captions for the split (a list)
        # Stores captions whose words have been converted to their vocabulary indices
        enc_captions = []
        for i, path in enumerate(imgpaths):
            # Check that the image can be opened
            img = Image.open(path)
            # Ensure each image has the required number of captions
            if len(imcaps[i]) < captions_per_image:
                filled_num = captions_per_image - len(imcaps[i])
                # Repeat existing captions if there are too few
                captions = imcaps[i] + [random.choice(imcaps[i]) for _ in range(0, filled_num)]
            else:
                # Randomly sample captions if there are more than needed
                captions = random.sample(imcaps[i], k=captions_per_image)
            assert len(captions) == captions_per_image
            for j, c in enumerate(captions):
                # Encode each caption by converting words to their vocabulary indices
                enc_c = [vocab['<start>']] + [vocab.get(word, vocab['<unk>']) for word in c] + [vocab["<end>"]]
                enc_captions.append(enc_c)
        assert len(imgpaths) * captions_per_image == len(enc_captions)
        data = {"IMAGES": imgpaths, "CAPTIONS": enc_captions}
        # Save the processed dataset for the current split (train, val, test)
        with open(os.path.join(output_folder, split + "_data.json"), 'w') as fw:
            json.dump(data, fw)


create_dataset()

After running the create_dataset function, we obtain the files shown in the figure below.

The detailed contents of the four files are shown in the following figures:

The first key in train_data.json: IMAGES
The second key in train_data.json: CAPTIONS
The first key in test_data.json: IMAGES
The second key in test_data.json: CAPTIONS
The first key in val_data.json: IMAGES
The second key in val_data.json: CAPTIONS
The beginning of vocab.json
The end of vocab.json

Key code for generating vocab.json
First, collect the words that appear more than min_word_count (5) times across all captions, then assign each of these words an index in turn.

# Create the vocabulary, adding placeholders for special tokens:
# padding <pad>, unknown words <unk>, and sentence start/end markers <start> <end>
# Keep only words whose frequency is higher than min_word_count
# (min_word_count: only consider words that appear at least this many times in the dataset, excluding the test set)
words = [w for w in vocab.keys() if vocab[w] > min_word_count]  # Filter words by minimum count
# Assign an index to each word, starting from 1 (enumerate starts from 0, so add 1; 0 is reserved for <pad>)
vocab = {k: v + 1 for v, k in enumerate(words)}  # Create the vocabulary with indices
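A toy example of this enumerate trick (the word counts are hypothetical, and min_word_count is set to 1 just for illustration):

from collections import Counter

vocab = Counter({"dog": 7, "runs": 3, "a": 12, "xylophone": 1})
min_word_count = 1

words = [w for w in vocab.keys() if vocab[w] > min_word_count]
vocab = {k: v + 1 for v, k in enumerate(words)}
vocab['<pad>'] = 0
vocab['<unk>'] = len(vocab)
vocab['<start>'] = len(vocab)
vocab['<end>'] = len(vocab)
print(vocab)
# {'dog': 1, 'runs': 2, 'a': 3, '<pad>': 0, '<unk>': 4, '<start>': 5, '<end>': 6}
# "xylophone" falls below the threshold, so it will later map to <unk>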

This finally produces vocab.json.

Key steps for generating [split]_data.json
Read dataset_flickr8k.json and build two dictionaries: one holds each image's absolute path, the other holds the captions describing that image, with each token replaced by its index in vocab. Based on each image's split field in dataset_flickr8k.json, the image paths and their corresponding captions are then saved to separate files (train_data.json, test_data.json, val_data.json).

dataset_flickr8k.json

train_data.json
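Schematically, each of these [split]_data.json files is a dictionary with two parallel lists; a quick check can confirm the shape (the path is illustrative):

import json

with open(".../sd/data/flickr8k/train_data.json") as f:  # illustrative path
    d = json.load(f)

print(list(d.keys()))                          # ['IMAGES', 'CAPTIONS']
print(len(d["CAPTIONS"]) // len(d["IMAGES"]))  # 5 -- captions_per_image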


The CAPTIONS encoding is obtained by looking up each token's index in vocab:

for j, c in enumerate(captions):
    # Encode each caption by converting words to their vocabulary indices
    enc_c = [vocab['<start>']] + [vocab.get(word, vocab['<unk>']) for word in c] + [vocab["<end>"]]
    enc_captions.append(enc_c)
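Continuing the toy vocabulary from above (hypothetical values), encoding one caption looks like this:

vocab = {'<pad>': 0, 'dog': 1, 'runs': 2, 'a': 3, '<unk>': 4, '<start>': 5, '<end>': 6}

c = ["a", "dog", "runs", "xylophone"]  # tokens of one caption
enc_c = [vocab['<start>']] + [vocab.get(word, vocab['<unk>']) for word in c] + [vocab['<end>']]
print(enc_c)  # [5, 3, 1, 2, 4, 6] -- "xylophone" is out of vocabulary, so it maps to <unk>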


Let's use the generated test-set files test_data.json and vocab.json to display an image together with its captions.
The code below is taken from: Flickr8k dataset processing (for learning purposes only)

'''
test
1. Iterates over the 5 captions for the 250th image.
2. Retrieves the word indices for each caption.
3. Converts the word indices to words using vocab_idx2word.
4. Joins the words to form complete sentences.
5. Prints each caption.
'''
import json
from PIL import Image
from matplotlib import pyplot as plt
# Load the vocabulary from the JSON file
with open('.../sd/data/flickr8k/vocab.json', 'r') as f:
    vocab = json.load(f)  # Load the vocabulary from the JSON file into a dictionary
# Create a dictionary to map indices to words
vocab_idx2word = {idx: word for word, idx in vocab.items()}
# Load the test data from the JSON file
with open('.../sd/data/flickr8k/test_data.json', 'r') as f:
    data = json.load(f)  # Load the test data from the JSON file into a dictionary
# Open and display the 250th image in the test set
# Open the image at index 250 in the 'IMAGES' list
content_img = Image.open(data['IMAGES'][250])
plt.figure(figsize=(6, 6))
plt.subplot(1,1,1)
plt.imshow(content_img)
plt.title('Image')
plt.axis('off')
plt.show()
# Print the lengths of the data, image list, and caption list
# Print the number of keys in the dataset dictionary (should be 2: 'IMAGES' and 'CAPTIONS')
print(len(data))
print(len(data['IMAGES']))  # Print the number of images in the 'IMAGES' list
print(len(data["CAPTIONS"]))  # Print the number of captions in the 'CAPTIONS' list
# Display the captions for the 250th image
# Iterate over the 5 captions associated with the 250th image
for i in range(5):
    # Get the word indices for the i-th caption of the 250th image
    word_indices = data['CAPTIONS'][250 * 5 + i]
    # Convert indices to words and join them with spaces to form a caption
    print(' '.join([vocab_idx2word[idx] for idx in word_indices]))

data has two keys: IMAGES and CAPTIONS
The test set contains 1000 images, each with 5 captions, i.e. 5000 captions in total
The 5 captions of the 250th image are shown in the figure below
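Because each image contributes 5 consecutive entries to CAPTIONS, the captions of image k occupy indices 5k to 5k+4; a small hypothetical helper makes this explicit:

def captions_of(data, k, cpi=5):
    """Return the cpi encoded captions belonging to the k-th image (hypothetical helper)."""
    return data["CAPTIONS"][k * cpi:(k + 1) * cpi]

# e.g. the loop above is equivalent to iterating over captions_of(data, 250)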

1.4 Dataloader

The code below is taken from: Flickr8k dataset processing (for learning purposes only)

import json
import os
import random
from collections import Counter, defaultdict
from PIL import Image
import torch
from torch.utils.data import Dataset
from torch.utils import data
import torchvision.transforms as transforms


class ImageTextDataset(Dataset):
    """PyTorch Dataset class used to generate data batches with the torch DataLoader"""

    def __init__(self, dataset_path, vocab_path, split, captions_per_image=5, max_len=30, transform=None):
        """
        Parameters:
            dataset_path: Path to the JSON file containing the dataset
            vocab_path: Path to the JSON file containing the vocabulary
            split: The dataset split, which can be "train", "val", or "test"
            captions_per_image: Number of captions per image
            max_len: Maximum number of words per caption
            transform: Image transformation methods
        """
        self.split = split
        # Validate that the split is one of the allowed values
        assert self.split in {"train", "val", "test"}
        # Store captions per image
        self.cpi = captions_per_image
        # Store maximum caption length
        self.max_len = max_len

        # Load the dataset
        with open(dataset_path, "r") as f:
            self.data = json.load(f)
        # Load the vocabulary
        with open(vocab_path, "r") as f:
            self.vocab = json.load(f)

        # Store the image transformation methods
        self.transform = transform

        # Size of the dataset = number of captions (each caption is one sample)
        self.dataset_size = len(self.data["CAPTIONS"])

    def __getitem__(self, i):
        """Retrieve the i-th sample from the dataset"""
        # The i-th sample corresponds to the [i // self.cpi]-th image (each image has multiple captions)
        img = Image.open(self.data['IMAGES'][i // self.cpi]).convert("RGB")
        # Apply the image transformation if provided
        if self.transform is not None:
            img = self.transform(img)
        # Get the length of the caption
        caplen = len(self.data["CAPTIONS"][i])
        # Pad the caption if it is shorter than max_len (+2 accounts for <start> and <end>)
        pad_caps = [self.vocab['<pad>']] * (self.max_len + 2 - caplen)
        # Convert the caption to a tensor and pad it
        caption = torch.LongTensor(self.data["CAPTIONS"][i] + pad_caps)
        return img, caption, caplen  # Return the image, caption, and caption length

    def __len__(self):
        return self.dataset_size  # Number of samples in the dataset


def make_train_val(data_dir, vocab_path, batch_size, workers=4):
    """
    Create DataLoader objects for the training, validation, and test sets.

    Parameters:
        data_dir: Directory where the dataset JSON files are located
        vocab_path: Path to the vocabulary JSON file
        batch_size: Number of samples per batch
        workers: Number of subprocesses to use for data loading (default is 4)
    Returns:
        train_loader: DataLoader for the training set
        val_loader: DataLoader for the validation set
        test_loader: DataLoader for the test set
    """
    # Define the transformation for the training set
    train_tx = transforms.Compose([
        transforms.Resize((256, 256)),  # Resize images to 256x256 (a fixed size so images can be batched)
        transforms.ToTensor(),          # Convert the image to a PyTorch tensor
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])  # Normalize with ImageNet mean and std
    ])
    val_tx = transforms.Compose([
        transforms.Resize((256, 256)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])

    # Create dataset objects for the training, validation, and test sets
    train_set = ImageTextDataset(dataset_path=os.path.join(data_dir, "train_data.json"), vocab_path=vocab_path,
                                 split="train", transform=train_tx)
    valid_set = ImageTextDataset(dataset_path=os.path.join(data_dir, "val_data.json"), vocab_path=vocab_path,
                                 split="val", transform=val_tx)
    test_set = ImageTextDataset(dataset_path=os.path.join(data_dir, "test_data.json"), vocab_path=vocab_path,
                                split="test", transform=val_tx)

    # Create the DataLoader for the training set with shuffling
    train_loader = data.DataLoader(dataset=train_set, batch_size=batch_size, shuffle=True,
                                   num_workers=workers, pin_memory=True)
    # Create the DataLoader for the validation set without shuffling
    val_loader = data.DataLoader(dataset=valid_set, batch_size=batch_size, shuffle=False,
                                 num_workers=workers, pin_memory=True, drop_last=False)
    # Create the DataLoader for the test set without shuffling
    test_loader = data.DataLoader(dataset=test_set, batch_size=batch_size, shuffle=False,
                                  num_workers=workers, pin_memory=True, drop_last=False)
    return train_loader, val_loader, test_loader
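A minimal usage sketch (the paths are illustrative and follow the output folder used in section 1.3):

train_loader, val_loader, test_loader = make_train_val(
    data_dir=".../sd/data/flickr8k",               # folder holding the *_data.json files (illustrative)
    vocab_path=".../sd/data/flickr8k/vocab.json",  # illustrative
    batch_size=32)

imgs, caps, caplens = next(iter(train_loader))
print(imgs.shape)  # torch.Size([32, 3, 256, 256])
print(caps.shape)  # torch.Size([32, 32]) -- max_len + 2 token ids per caption (<start> ... <end> plus padding)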

With train_loader created, we can now start training SD!
