
DirectShow Filter Development: A Video File Reader Filter (Revisited)

Download this filter's DLL
This filter reads a video file and outputs a video stream and an audio stream. The stream types are determined by the file. File formats known to work include AVI, ASF, MOV, MP4, MPG, and WMV.

Filter Information

Filter name: 读视频文件 (Read Video File)
Filter GUID: {29001AD7-37A5-45E0-A750-E76453B36E33}
DLL registration function: DllRegisterServer
DLL unregistration function: DllUnregisterServer
The filter has two output pins.

Output pin 1 identifier: Video
Output pin 1 media type:
Major type: MEDIATYPE_Video
Subtype: MEDIASUBTYPE_NULL

Output pin 2 identifier: Audio
Output pin 2 media type:
Major type: MEDIATYPE_Audio
Subtype: MEDIASUBTYPE_NULL

Filter Development Notes

The filter implements the IFileSourceFilter interface, which is used to specify the video file to read. In the interface's Load method the filter determines the media types of the file's video and audio streams, assigns them to the video and audio output pins, and obtains the file's duration.

When the filter starts running, it creates a media source thread. That thread creates the Media Foundation media source and retrieves its events. After the media source is started (by calling its Start method), it raises the "new stream created" event (MENewStream), whose value is the video or audio stream interface. Once a stream interface has been obtained, a video or audio worker thread (stream thread) is created to retrieve that stream's events. After a sample has been requested, the stream raises a "new sample" event (MEMediaSample), whose value is the sample interface. This is a Media Foundation sample; it is copied into a pin sample, which the pin delivers downstream, and then the next sample is requested.

When the filter stops, it calls the media source's Stop method, which causes a "stream stopped" event (MEStreamStopped). On receiving this event the video and audio worker threads exit, but the media source thread keeps running. When the filter is run again, the media source is started again and raises a "stream updated" event (MEUpdatedStream), whose value is the updated stream interface; the worker threads are then created again.

When the playback position is changed (seeking, which calls the media source's Start method with the new start position as a parameter), the media source also raises the "stream updated" event. In that case only the stream interfaces are refreshed; no new worker threads are created, because they already exist. The media source automatically locates the nearest key frame and starts from there.

The media source thread itself exits only when the filter is destroyed or a new file is specified; in both cases the "exit media source thread" event is signaled.
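To make this event flow easier to follow, here is a simplified, illustrative sketch of the polling loop the media source thread runs. It is not the filter's own code (the full MediaSourceThread appears later in this article); the function name PumpSourceEvents and the omission of stream bookkeeping and error handling are simplifications made for this sketch.

#include <windows.h>
#include <mfapi.h>
#include <mfidl.h>

void PumpSourceEvents(IMFMediaSource* pSource, HANDLE hExit)
{
    for (;;)
    {
        if (WaitForSingleObject(hExit, 0) == WAIT_OBJECT_0)  // "exit media source thread" signal
        {
            pSource->Stop();
            pSource->Shutdown();
            return;
        }

        IMFMediaEvent* pEvent = NULL;
        if (FAILED(pSource->GetEvent(MF_EVENT_FLAG_NO_WAIT, &pEvent)))  // poll without blocking
            continue;

        MediaEventType type = MEUnknown;
        pEvent->GetType(&type);
        switch (type)
        {
        case MENewStream:      // first start: a stream object was created
        case MEUpdatedStream:  // restart or seek: an existing stream object was refreshed
            // Query the event value for IMFMediaStream, call RequestSample(),
            // and (for MENewStream, or if the old worker has exited) create the
            // video/audio worker thread that pumps that stream's own events.
            break;
        default:
            break;
        }
        pEvent->Release();
    }
}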
The IMediaSeeking interface is used to change the current playback position (seeking). It can be implemented on the filter itself or on either pin; if it is implemented on a pin, the audio pin is the better choice.
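For context, the following is a minimal sketch of how a host application might use the filter once the DLL has been registered. The CLSID matches the filter information above, while the file path, the missing error handling, and the omission of downstream pin rendering are illustrative simplifications only.

#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

// CLSID of the filter, taken from the filter information above.
// {29001AD7-37A5-45E0-A750-E76453B36E33}
static const GUID CLSID_Reader =
{ 0x29001ad7, 0x37a5, 0x45e0, { 0xa7, 0x50, 0xe7, 0x64, 0x53, 0xb3, 0x6e, 0x33 } };

int wmain()
{
    CoInitialize(NULL);

    IGraphBuilder* pGraph = NULL;
    CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pGraph));

    // Create the reader filter (the DLL must already be registered) and add it to the graph.
    IBaseFilter* pReader = NULL;
    CoCreateInstance(CLSID_Reader, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pReader));
    pGraph->AddFilter(pReader, L"读视频文件");

    // Tell the filter which file to read through IFileSourceFilter::Load.
    IFileSourceFilter* pLoad = NULL;
    pReader->QueryInterface(IID_PPV_ARGS(&pLoad));
    pLoad->Load(L"C:\\sample.mp4", NULL);  // example path

    // Connecting decoders/renderers to the Video and Audio pins and running the
    // graph are omitted here; per the filter's SetPositions implementation the
    // graph must be running for a seek to take effect.

    // Seeking goes through IMediaSeeking; the graph manager forwards it to the audio pin.
    IMediaSeeking* pSeek = NULL;
    pGraph->QueryInterface(IID_PPV_ARGS(&pSeek));
    LONGLONG pos = 10 * 10000000LL;  // 10 seconds, in 100-ns units
    pSeek->SetPositions(&pos, AM_SEEKING_AbsolutePositioning, NULL, AM_SEEKING_NoPositioning);

    pSeek->Release();
    pLoad->Release();
    pReader->Release();
    pGraph->Release();
    CoUninitialize();
    return 0;
}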

Complete Source Code of the Filter DLL

DLL.h

#ifndef DLL_FILE
#define DLL_FILE

#include "strmbase10.h"                    // filter base-class definitions
#if _DEBUG
#pragma comment(lib, "strmbasd10.lib")     // debug build of the filter base-class library
#else
#pragma comment(lib, "strmbase10.lib")     // release build of the filter base-class library
#endif

// {29001AD7-37A5-45E0-A750-E76453B36E33}
DEFINE_GUID(CLSID_Reader,                  // filter GUID
    0x29001ad7, 0x37a5, 0x45e0, 0xa7, 0x50, 0xe7, 0x64, 0x53, 0xb3, 0x6e, 0x33);

#include "mfapi.h"
#include "mfidl.h"
#include "mferror.h"
#include "evr.h"
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mf.lib")

class CPin2;
class CFilter;

class CPin1 : public CBaseOutputPin        // video output pin
{
    friend class CPin2;
    friend class CFilter;
public:
    CPin1(CFilter *pFilter, HRESULT *phr, LPCWSTR pPinName);
    ~CPin1();
    DECLARE_IUNKNOWN
    STDMETHODIMP NonDelegatingQueryInterface(REFIID riid, void **ppvoid);
    BOOL HasSet = FALSE;
    HRESULT CheckMediaType(const CMediaType *pmt);
    HRESULT GetMediaType(int iPosition, CMediaType *pMediaType);
    HRESULT SetMediaType(const CMediaType *pmt);
    HRESULT BreakConnect();
    HRESULT DecideBufferSize(IMemAllocator *pMemAllocator, ALLOCATOR_PROPERTIES *ppropInputRequest);
    CFilter *pCFilter = NULL;
    STDMETHODIMP Notify(IBaseFilter *pSelf, Quality q) { return E_FAIL; }
};

class CPin2 : public CBaseOutputPin, public IMediaSeeking   // audio output pin
{
    friend class CPin1;
    friend class CFilter;
public:
    CPin2(CFilter *pFilter, HRESULT *phr, LPCWSTR pPinName);
    ~CPin2();
    DECLARE_IUNKNOWN
    STDMETHODIMP NonDelegatingQueryInterface(REFIID riid, void **ppvoid);
    BOOL HasSet = FALSE;
    HRESULT CheckMediaType(const CMediaType *pmt);
    HRESULT GetMediaType(int iPosition, CMediaType *pMediaType);
    HRESULT SetMediaType(const CMediaType *pmt);
    HRESULT BreakConnect();
    HRESULT DecideBufferSize(IMemAllocator *pMemAllocator, ALLOCATOR_PROPERTIES *ppropInputRequest);
    CFilter *pCFilter = NULL;
    DWORD m_dwSeekingCaps = AM_SEEKING_CanSeekForwards | AM_SEEKING_CanSeekBackwards |
                            AM_SEEKING_CanSeekAbsolute | AM_SEEKING_CanGetStopPos |
                            AM_SEEKING_CanGetDuration;

    // IMediaSeeking
    HRESULT STDMETHODCALLTYPE CheckCapabilities(DWORD *pCapabilities);
    HRESULT STDMETHODCALLTYPE ConvertTimeFormat(LONGLONG *pTarget, const GUID *pTargetFormat, LONGLONG Source, const GUID *pSourceFormat);
    HRESULT STDMETHODCALLTYPE GetAvailable(LONGLONG *pEarliest, LONGLONG *pLatest);
    HRESULT STDMETHODCALLTYPE GetCapabilities(DWORD *pCapabilities);
    HRESULT STDMETHODCALLTYPE GetCurrentPosition(LONGLONG *pCurrent);
    HRESULT STDMETHODCALLTYPE GetDuration(LONGLONG *pDuration);
    HRESULT STDMETHODCALLTYPE GetPositions(LONGLONG *pCurrent, LONGLONG *pStop);
    HRESULT STDMETHODCALLTYPE GetPreroll(LONGLONG *pllPreroll);
    HRESULT STDMETHODCALLTYPE GetRate(double *pdRate);
    HRESULT STDMETHODCALLTYPE GetStopPosition(LONGLONG *pStop);
    HRESULT STDMETHODCALLTYPE GetTimeFormat(GUID *pFormat);
    HRESULT STDMETHODCALLTYPE IsFormatSupported(const GUID *pFormat);
    HRESULT STDMETHODCALLTYPE IsUsingTimeFormat(const GUID *pFormat);
    HRESULT STDMETHODCALLTYPE QueryPreferredFormat(GUID *pFormat);
    HRESULT STDMETHODCALLTYPE SetPositions(LONGLONG *pCurrent, DWORD dwCurrentFlags, LONGLONG *pStop, DWORD dwStopFlags);
    HRESULT STDMETHODCALLTYPE SetRate(double dRate);
    HRESULT STDMETHODCALLTYPE SetTimeFormat(const GUID *pFormat);

    STDMETHODIMP Notify(IBaseFilter *pSelf, Quality q) { return E_FAIL; }
};

class CFilter : public CCritSec, public CBaseFilter, public IFileSourceFilter
{
    friend class CPin1;
    friend class CPin2;
public:
    CFilter(TCHAR* pName, LPUNKNOWN pUnk, HRESULT* hr);
    ~CFilter();
    CBasePin* GetPin(int n);
    int GetPinCount();
    static CUnknown * WINAPI CreateInstance(LPUNKNOWN pUnk, HRESULT *phr);
    DECLARE_IUNKNOWN
    STDMETHODIMP NonDelegatingQueryInterface(REFIID iid, void **ppv);

    // IFileSourceFilter
    STDMETHODIMP Load(LPCOLESTR lpwszFileName, const AM_MEDIA_TYPE *pmt);
    STDMETHODIMP GetCurFile(LPOLESTR *ppszFileName, AM_MEDIA_TYPE *pmt);

    HRESULT GetMediaType();                        // determine the video/audio stream media types
    REFERENCE_TIME mStart = 0;
    HANDLE hSourceThread = NULL;                   // media source thread handle
    HANDLE hVideoThread = NULL;                    // video stream thread handle
    HANDLE hAudioThread = NULL;                    // audio stream thread handle
    HANDLE hExit = NULL;                           // "exit media source thread" event handle
    HANDLE hInit = NULL;                           // "media source created" event handle
    STDMETHODIMP Pause();
    STDMETHODIMP Stop();
    CPin1* pCPin1 = NULL;                          // video pin
    CPin2* pCPin2 = NULL;                          // audio pin
    WCHAR* m_pFileName = NULL;                     // path of the video file to read
    LONGLONG DUR = 0;                              // duration, in 100-ns units
    LONGLONG CUR = 0;                              // current position, in 100-ns units
    IMFMediaType* pVideoType = NULL;               // video media type
    IMFMediaType* pAudioType = NULL;               // audio media type
    IMFMediaSource *pIMFMediaSource = NULL;        // media source interface
    IMFPresentationDescriptor* pISourceD = NULL;   // presentation descriptor
    IMFMediaStream* pVideoStream = NULL;           // video stream interface
    IMFMediaStream* pAudioStream = NULL;           // audio stream interface
};

template <class T> void SafeRelease(T** ppT)
{
    if (*ppT)
    {
        (*ppT)->Release();
        *ppT = NULL;
    }
}

#endif // DLL_FILE

DLL.cpp

#include "DLL.h"const AMOVIESETUP_MEDIATYPE Pin1Type =   //引脚1媒体类型
{&MEDIATYPE_Video,                    //主要类型&MEDIASUBTYPE_NULL                   //子类型
};const AMOVIESETUP_MEDIATYPE Pin2Type =   //引脚2媒体类型
{&MEDIATYPE_Audio,                    //主要类型&MEDIASUBTYPE_NULL                   //子类型
};const AMOVIESETUP_PIN sudPins[] =       //引脚信息
{{L"Video",                       //引脚名称FALSE,                          //渲染引脚TRUE,                           //输出引脚FALSE,                          //具有该引脚的零个实例FALSE,                          //可以创建一个以上引脚的实例&CLSID_NULL,                    //该引脚连接的过滤器的类标识NULL,                           //该引脚连接的引脚名称1,                              //引脚支持的媒体类型数&Pin1Type                       //媒体类型信息},{L"Audio",                       //引脚名称FALSE,                          //渲染引脚TRUE,                           //输出引脚FALSE,                          //具有该引脚的零个实例FALSE,                         //可以创建一个以上引脚的实例&CLSID_NULL,                   //该引脚连接的过滤器的类标识NULL,                          //该引脚连接的引脚名称1,                             //引脚支持的媒体类型数&Pin2Type                      //媒体类型信息}
};const AMOVIESETUP_FILTER Reader =  //过滤器的注册信息
{&CLSID_Reader,                 //过滤器的类标识L"读视频文件",                  //过滤器的名称MERIT_DO_NOT_USE,              //过滤器优先值2,                             //引脚数量sudPins                        //引脚信息
};CFactoryTemplate g_Templates[] =   //类工厂模板数组
{{L"读视频文件",             //对象(这里为过滤器)名称&CLSID_Reader,            //对象CLSID的指针CFilter::CreateInstance,  //创建对象实例的函数的指针NULL,                     //指向从DLL入口点调用的函数的指针&Reader                   //指向AMOVIESETUP_FILTER结构的指针}
};int g_cTemplates = 1;//模板数组大小STDAPI DllRegisterServer()//注册DLL
{return AMovieDllRegisterServer2(TRUE);
}STDAPI DllUnregisterServer()//删除DLL注册
{return AMovieDllRegisterServer2(FALSE);
}extern "C" BOOL WINAPI DllEntryPoint(HINSTANCE, ULONG, LPVOID);BOOL APIENTRY DllMain(HANDLE hModule, DWORD  dwReason, LPVOID lpReserved)
{return DllEntryPoint((HINSTANCE)(hModule), dwReason, lpReserved);
}
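A note on registration: once the DLL has been built, it can be registered from an elevated command prompt with regsvr32 (passing the path of the built DLL file), which calls the DllRegisterServer export above; regsvr32 /u reverses the registration via DllUnregisterServer. The exact DLL file name depends on the project settings, so it is not shown here.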

CFilter.cpp

#include "DLL.h"
#include "strsafe.h"DWORD WINAPI  MediaSourceThread(LPVOID pParam);
DWORD WINAPI  VideoThread(LPVOID pParam);
DWORD WINAPI  AudioThread(LPVOID pParam);CFilter::CFilter(TCHAR *pName, LPUNKNOWN pUnk, HRESULT *phr) : CBaseFilter(NAME("读视频文件"), pUnk, this, CLSID_Reader)
{HRESULT hr = MFStartup(MF_VERSION);//初始化媒体基础if (hr != S_OK){MessageBox(NULL, L"初始化媒体基础失败", L"读视频文件", MB_OK); return;}pCPin1 = new CPin1(this, phr, L"Video");//创建视频输出引脚pCPin2 = new CPin2(this, phr, L"Audio");//创建音频输出引脚hExit = CreateEvent(NULL, FALSE, FALSE, NULL);//自动重置,初始无信号hInit = CreateEvent(NULL, FALSE, FALSE, NULL);//自动重置,初始无信号
}CFilter::~CFilter()
{SetEvent(hExit);//发送“退出媒体源线程”信号SafeRelease(&pVideoType); SafeRelease(&pAudioType);//释放媒体类型if (m_pFileName)delete[] m_pFileName;MFShutdown();//关闭媒体基础CloseHandle(hSourceThread); CloseHandle(hVideoThread); CloseHandle(hAudioThread); CloseHandle(hExit); CloseHandle(hInit);
}CBasePin *CFilter::GetPin(int n)
{if (n == 0)return pCPin1;if (n == 1)return pCPin2;return NULL;
}int CFilter::GetPinCount()
{return 2;
}CUnknown * WINAPI CFilter::CreateInstance(LPUNKNOWN pUnk, HRESULT *phr)
{return new CFilter(NAME("读视频文件"), pUnk, phr);
}STDMETHODIMP CFilter::NonDelegatingQueryInterface(REFIID iid, void ** ppv)
{if (iid == IID_IFileSourceFilter){return GetInterface(static_cast<IFileSourceFilter*>(this), ppv);}elsereturn CBaseFilter::NonDelegatingQueryInterface(iid, ppv);
}STDMETHODIMP CFilter::Load(LPCOLESTR lpwszFileName, const AM_MEDIA_TYPE *pmt)
{CheckPointer(lpwszFileName, E_POINTER);if (wcslen(lpwszFileName) > MAX_PATH || wcslen(lpwszFileName)<4)return ERROR_FILENAME_EXCED_RANGE;size_t len = 1 + lstrlenW(lpwszFileName);if (m_pFileName != NULL)delete[] m_pFileName;m_pFileName = new WCHAR[len];if (m_pFileName == NULL)return E_OUTOFMEMORY;HRESULT hr = StringCchCopyW(m_pFileName, len, lpwszFileName);hr = GetMediaType();//获取视频音频流媒体类型if (hr != S_OK)//如果获取媒体类型失败{delete[] m_pFileName; m_pFileName = NULL;return VFW_E_INVALID_FILE_FORMAT;//设置文件名失败}return S_OK;
}STDMETHODIMP CFilter::GetCurFile(LPOLESTR * ppszFileName, AM_MEDIA_TYPE *pmt)
{CheckPointer(ppszFileName, E_POINTER);*ppszFileName = NULL;if (m_pFileName != NULL){DWORD n = sizeof(WCHAR)*(1 + lstrlenW(m_pFileName));*ppszFileName = (LPOLESTR)CoTaskMemAlloc(n);if (*ppszFileName != NULL){CopyMemory(*ppszFileName, m_pFileName, n); return S_OK;}}return S_FALSE;
}HRESULT CFilter::GetMediaType()//获取视频音频流媒体类型
{DWORD dw = WaitForSingleObject(hSourceThread, 0);if (dw == WAIT_TIMEOUT)//如果媒体源线程正在运行{SetEvent(hExit);//发送“退出媒体源线程”信号WaitForSingleObject(hSourceThread, INFINITE);//等待媒体源线程退出}SafeRelease(&pVideoType); SafeRelease(&pAudioType);//释放媒体类型IMFPresentationDescriptor* pSourceD = NULL;//演示文稿描述符IMFMediaSource *pMFMediaSource = NULL;//媒体源接口IMFStreamDescriptor* pStreamD1 = NULL;//流1描述符IMFStreamDescriptor* pStreamD2 = NULL;//流2描述符IMFMediaTypeHandler* pHandle1 = NULL;//流1媒体类型处理器IMFMediaTypeHandler* pHandle2 = NULL;//流2媒体类型处理器IMFSourceResolver* pSourceResolver = NULL;//源解析器HRESULT hr = MFCreateSourceResolver(&pSourceResolver);//创建源解析器MF_OBJECT_TYPE ObjectType = MF_OBJECT_INVALID;IUnknown* pSource = NULL;if (SUCCEEDED(hr)){hr = pSourceResolver->CreateObjectFromURL(//从URL创建媒体源m_pFileName,                       //源的URLMF_RESOLUTION_MEDIASOURCE,  //创建源对象NULL,                      //可选属性存储&ObjectType,        //接收创建的对象类型&pSource            //接收指向媒体源的指针);}SafeRelease(&pSourceResolver);//释放源解析器if (SUCCEEDED(hr)){hr = pSource->QueryInterface(IID_PPV_ARGS(&pMFMediaSource));//获取媒体源接口}SafeRelease(&pSource);//释放IUnknown接口if (SUCCEEDED(hr)){hr = pMFMediaSource->CreatePresentationDescriptor(&pSourceD);//获取演示文稿描述符}if (SUCCEEDED(hr)){hr = pSourceD->GetUINT64(MF_PD_DURATION, (UINT64*)&DUR);//获取文件流的时长,100纳秒为单位}BOOL  Selected;if (SUCCEEDED(hr)){hr = pSourceD->GetStreamDescriptorByIndex(0, &Selected, &pStreamD1);//获取流1描述符}if (SUCCEEDED(hr)){hr = pStreamD1->GetMediaTypeHandler(&pHandle1);//获取流1媒体类型处理器}SafeRelease(&pStreamD1);//释放流1描述符GUID guid;if (SUCCEEDED(hr)){hr = pHandle1->GetMajorType(&guid);//获取流1主要类型}if (SUCCEEDED(hr)){if (guid == MEDIATYPE_Video)//如果是视频{hr = pHandle1->GetCurrentMediaType(&pVideoType);//获取视频流的媒体类型}else if (guid == MEDIATYPE_Audio)//如果是音频{hr = pHandle1->GetCurrentMediaType(&pAudioType);//获取音频流的媒体类型}}SafeRelease(&pHandle1);//释放流1媒体类型处理器if (SUCCEEDED(hr)){hr = pSourceD->GetStreamDescriptorByIndex(1, &Selected, &pStreamD2);//获取流2描述符}if (SUCCEEDED(hr)){hr = pStreamD2->GetMediaTypeHandler(&pHandle2);//获取流2媒体类型处理器}SafeRelease(&pStreamD2);//释放流2描述符if (SUCCEEDED(hr)){hr = pHandle2->GetMajorType(&guid);//获取流2主要类型}if (SUCCEEDED(hr)){if (guid == MEDIATYPE_Video)//如果是视频{hr = pHandle2->GetCurrentMediaType(&pVideoType);//获取视频流的媒体类型}else if (guid == MEDIATYPE_Audio)//如果是音频{hr = pHandle2->GetCurrentMediaType(&pAudioType);//获取音频流的媒体类型}}SafeRelease(&pHandle2);//释放流2媒体类型处理器SafeRelease(&pSourceD);//释放演示文稿描述符SafeRelease(&pMFMediaSource);//释放媒体源接口return hr;
}

STDMETHODIMP CFilter::Pause()
{
    if (m_State == State_Stopped)
    {
        DWORD dw = WaitForSingleObject(hSourceThread, 0);
        if (dw == WAIT_FAILED || dw == WAIT_OBJECT_0)  // the media source thread was never created or has exited
        {
            hSourceThread = CreateThread(NULL, 0, MediaSourceThread, this, 0, NULL);  // create the media source thread
            WaitForSingleObject(hInit, INFINITE);  // wait until the media source has been created
        }
        mStart = 0;
        PROPVARIANT var;
        PropVariantInit(&var);
        var.vt = VT_I8;
        var.hVal.QuadPart = mStart;
        HRESULT hr = pIMFMediaSource->Start(pISourceD, NULL, &var);  // start the media source
        PropVariantClear(&var);
    }
    return CBaseFilter::Pause();
}

STDMETHODIMP CFilter::Stop()
{
    if (pIMFMediaSource)          // the source exists only after the filter has been run at least once
        pIMFMediaSource->Stop();
    return CBaseFilter::Stop();
}

DWORD WINAPI MediaSourceThread(LPVOID pParam)  // media source thread
{
    CFilter* pCFilter = (CFilter*)pParam;
    HRESULT hr, hr1, hr2;
    IMFMediaStream* pIMFMediaStream = NULL;
    IMFStreamDescriptor* pIMFStreamDescriptor = NULL;
    IMFMediaTypeHandler* pHandler = NULL;
    pCFilter->pVideoStream = NULL;  // video stream interface
    pCFilter->pAudioStream = NULL;  // audio stream interface

    IMFSourceResolver* pSourceResolver = NULL;      // source resolver
    hr = MFCreateSourceResolver(&pSourceResolver);  // create the source resolver
    MF_OBJECT_TYPE ObjectType = MF_OBJECT_INVALID;
    IUnknown* pSource = NULL;
    if (SUCCEEDED(hr))
    {
        hr = pSourceResolver->CreateObjectFromURL(   // create the media source from a URL
            pCFilter->m_pFileName,                   // URL of the source
            MF_RESOLUTION_MEDIASOURCE,               // create a source object
            NULL,                                    // optional property store
            &ObjectType,                             // receives the type of the created object
            &pSource);                               // receives a pointer to the media source
    }
    SafeRelease(&pSourceResolver);  // release the source resolver
    if (SUCCEEDED(hr))
    {
        hr = pSource->QueryInterface(IID_PPV_ARGS(&pCFilter->pIMFMediaSource));  // get the media source interface
    }
    SafeRelease(&pSource);  // release the IUnknown interface
    if (SUCCEEDED(hr))
    {
        hr = pCFilter->pIMFMediaSource->CreatePresentationDescriptor(&pCFilter->pISourceD);  // get the presentation descriptor
    }
    if (hr != S_OK)
    {
        SafeRelease(&pCFilter->pIMFMediaSource);  // release the media source interface
        return hr;
    }
    SetEvent(pCFilter->hInit);  // signal "media source created"

Agan:
    GUID guid;
    IMFMediaEvent* pSourceEvent = NULL;
    hr1 = pCFilter->pIMFMediaSource->GetEvent(MF_EVENT_FLAG_NO_WAIT, &pSourceEvent);  // poll for a media source event without waiting

    DWORD dw = WaitForSingleObject(pCFilter->hExit, 0);  // check for the "exit media source thread" signal
    if (dw == WAIT_OBJECT_0)  // the exit signal is set
    {
        hr = pCFilter->pIMFMediaSource->Stop();
        pCFilter->pIMFMediaSource->Shutdown();
        SafeRelease(&pCFilter->pISourceD);        // release the presentation descriptor
        SafeRelease(&pCFilter->pIMFMediaSource);  // release the media source
        return 1;  // exit the thread
    }
    if (SUCCEEDED(hr1))  // a media source event was retrieved
    {
        MediaEventType MET;
        hr2 = pSourceEvent->GetType(&MET);  // get the event type
        if (SUCCEEDED(hr2))
        {
            PROPVARIANT vr;
            PropVariantInit(&vr);
            hr = pSourceEvent->GetValue(&vr);  // get the event value
            switch (MET)
            {
            case MENewStream:  // a new stream was created
                hr = vr.punkVal->QueryInterface(&pIMFMediaStream);  // get the stream interface
                vr.punkVal->Release();  // release the IUnknown interface
                hr = pIMFMediaStream->GetStreamDescriptor(&pIMFStreamDescriptor);  // get the stream descriptor
                hr = pIMFStreamDescriptor->GetMediaTypeHandler(&pHandler);  // get the media type handler
                SafeRelease(&pIMFStreamDescriptor);  // release the stream descriptor
                hr = pHandler->GetMajorType(&guid);  // get the major type
                SafeRelease(&pHandler);  // release the media type handler
                if (guid == MEDIATYPE_Video)  // video stream
                {
                    pCFilter->pVideoStream = pIMFMediaStream;          // store the video stream
                    hr = pCFilter->pVideoStream->RequestSample(NULL);  // request the first video sample
                    pCFilter->hVideoThread = CreateThread(NULL, 0, VideoThread, pCFilter, 0, NULL);  // create the video worker thread
                }
                if (guid == MEDIATYPE_Audio)  // audio stream
                {
                    pCFilter->pAudioStream = pIMFMediaStream;          // store the audio stream
                    hr = pCFilter->pAudioStream->RequestSample(NULL);  // request the first audio sample
                    pCFilter->hAudioThread = CreateThread(NULL, 0, AudioThread, pCFilter, 0, NULL);  // create the audio worker thread
                }
                break;
            case MEUpdatedStream:  // a stream was updated (sent when restarting or seeking)
                hr = vr.punkVal->QueryInterface(&pIMFMediaStream);  // get the stream interface
                vr.punkVal->Release();  // release the IUnknown interface
                hr = pIMFMediaStream->GetStreamDescriptor(&pIMFStreamDescriptor);  // get the stream descriptor
                hr = pIMFStreamDescriptor->GetMediaTypeHandler(&pHandler);  // get the media type handler
                SafeRelease(&pIMFStreamDescriptor);  // release the stream descriptor
                hr = pHandler->GetMajorType(&guid);  // get the major type
                SafeRelease(&pHandler);  // release the media type handler
                if (guid == MEDIATYPE_Video)  // video stream
                {
                    pCFilter->pVideoStream = pIMFMediaStream;          // update the video stream
                    hr = pCFilter->pVideoStream->RequestSample(NULL);  // request the first video sample
                    DWORD dw = WaitForSingleObject(pCFilter->hVideoThread, 0);
                    if (dw == WAIT_OBJECT_0)  // only if the video worker thread has exited; avoids creating a duplicate thread when seeking
                    {
                        pCFilter->hVideoThread = CreateThread(NULL, 0, VideoThread, pCFilter, 0, NULL);  // create the video worker thread
                    }
                }
                if (guid == MEDIATYPE_Audio)  // audio stream
                {
                    pCFilter->pAudioStream = pIMFMediaStream;          // update the audio stream
                    hr = pCFilter->pAudioStream->RequestSample(NULL);  // request the first audio sample
                    DWORD dw = WaitForSingleObject(pCFilter->hAudioThread, 0);
                    if (dw == WAIT_OBJECT_0)  // only if the audio worker thread has exited; avoids creating a duplicate thread when seeking
                    {
                        pCFilter->hAudioThread = CreateThread(NULL, 0, AudioThread, pCFilter, 0, NULL);  // create the audio worker thread
                    }
                }
                break;
            }
            PropVariantClear(&vr);
        }
        SafeRelease(&pSourceEvent);  // release the media source event
    }
    goto Agan;
}

DWORD WINAPI VideoThread(LPVOID pParam)  // video worker thread
{
    CFilter* pCFilter = (CFilter*)pParam;
    HRESULT hr;
    hr = pCFilter->pCPin1->DeliverBeginFlush();
    Sleep(200);
    hr = pCFilter->pCPin1->DeliverEndFlush();
    hr = pCFilter->pCPin1->DeliverNewSegment(0, pCFilter->DUR - pCFilter->mStart, 1.0);

Agan:
    HRESULT hr1, hr2;
    IMFSample* pIMFSample = NULL;
    IMFMediaEvent* pStreamEvent = NULL;
    hr1 = pCFilter->pVideoStream->GetEvent(0, &pStreamEvent);  // get a stream event, waiting indefinitely
    if (hr1 == S_OK)
    {
        MediaEventType meType = MEUnknown;
        hr2 = pStreamEvent->GetType(&meType);  // get the event type
        if (hr2 == S_OK)
        {
            PROPVARIANT var;
            PropVariantInit(&var);
            hr = pStreamEvent->GetValue(&var);  // get the event value
            switch (meType)
            {
            case MEMediaSample:  // a new sample is available
                hr = var.punkVal->QueryInterface(&pIMFSample);  // get the sample interface
                if (hr == S_OK)
                {
                    HRESULT hrA, hrB;
                    UINT32 CleanPoint;
                    hrA = pIMFSample->GetUINT32(MFSampleExtension_CleanPoint, &CleanPoint);  // is this a key frame?
                    UINT32 Discontinuity;
                    hrB = pIMFSample->GetUINT32(MFSampleExtension_Discontinuity, &Discontinuity);  // does it carry a discontinuity flag?
                    LONGLONG star, dur;
                    hr = pIMFSample->GetSampleTime(&star);
                    hr = pIMFSample->GetSampleDuration(&dur);
                    DWORD len;
                    hr = pIMFSample->GetTotalLength(&len);
                    DWORD Count;
                    hr = pIMFSample->GetBufferCount(&Count);
                    IMFMediaBuffer* pBuffer = NULL;
                    if (Count == 1)
                        hr = pIMFSample->GetBufferByIndex(0, &pBuffer);
                    else
                        hr = pIMFSample->ConvertToContiguousBuffer(&pBuffer);  // merge a multi-buffer sample into a single buffer
                    BYTE* pData = NULL;
                    DWORD MLen, CLen;
                    hr = pBuffer->Lock(&pData, &MLen, &CLen);
                    IMediaSample* pOutSample = NULL;
                    hr = pCFilter->pCPin1->GetDeliveryBuffer(&pOutSample, NULL, NULL, 0);  // get an empty output pin sample
                    if (hr == S_OK)
                    {
                        BYTE* pOutBuffer = NULL;
                        hr = pOutSample->GetPointer(&pOutBuffer);  // get the output sample's buffer pointer
                        if (pOutBuffer && pData && len <= 10000000)
                            CopyMemory(pOutBuffer, pData, len);
                        LONGLONG STAR = star - pCFilter->mStart, END = STAR + dur;
                        hr = pOutSample->SetTime(&STAR, &END);      // set the output sample time stamps
                        hr = pOutSample->SetActualDataLength(len);  // set the output sample's valid data length
                        if (hrA == S_OK)
                        {
                            if (CleanPoint)  // key frame
                                hr = pOutSample->SetSyncPoint(TRUE);  // set the sync point flag
                            else
                                hr = pOutSample->SetSyncPoint(FALSE);
                        }
                        if (hrB == S_OK)
                        {
                            if (Discontinuity)  // discontinuity
                            {
                                hr = pOutSample->SetDiscontinuity(TRUE);  // set the discontinuity flag
                            }
                            else
                                hr = pOutSample->SetDiscontinuity(FALSE);
                        }
                        hr = pCFilter->pCPin1->Deliver(pOutSample);  // deliver the sample downstream
                        pOutSample->Release();  // release the output pin sample
                    }
                    pBuffer->Unlock();
                    SafeRelease(&pBuffer);
                    SafeRelease(&pIMFSample);
                    hr = pCFilter->pVideoStream->RequestSample(NULL);  // request the next sample
                }
                break;
            case MEEndOfStream:  // end of stream
                hr = pCFilter->pCPin1->DeliverEndOfStream();
                break;
            case MEStreamStopped:  // the stream has stopped
                PropVariantClear(&var);
                SafeRelease(&pStreamEvent);
                return 1;  // terminate the video worker thread
            }
            PropVariantClear(&var);
        }
        SafeRelease(&pStreamEvent);
    }
    else
    {
        return 0;  // GetEvent failed (for example, the source was shut down); exit the worker thread
    }
    goto Agan;
}

DWORD WINAPI AudioThread(LPVOID pParam)  // audio worker thread
{
    CFilter* pCFilter = (CFilter*)pParam;
    HRESULT hr, hrA, hrB;
    hr = pCFilter->pCPin2->DeliverBeginFlush();
    Sleep(100);
    hr = pCFilter->pCPin2->DeliverEndFlush();
    hr = pCFilter->pCPin2->DeliverNewSegment(0, pCFilter->DUR - pCFilter->mStart, 1.0);

Agan:
    HRESULT hr1, hr2;
    IMFSample* pIMFSample = NULL;
    IMFMediaEvent* pStreamEvent = NULL;
    hr1 = pCFilter->pAudioStream->GetEvent(0, &pStreamEvent);  // get a stream event, waiting indefinitely
    if (hr1 == S_OK)
    {
        MediaEventType meType = MEUnknown;
        hr2 = pStreamEvent->GetType(&meType);  // get the event type
        if (hr2 == S_OK)
        {
            PROPVARIANT var;
            PropVariantInit(&var);
            hr = pStreamEvent->GetValue(&var);  // get the event value
            switch (meType)
            {
            case MEMediaSample:  // a new sample is available
                hr = var.punkVal->QueryInterface(&pIMFSample);  // get the sample interface
                if (hr == S_OK)
                {
                    UINT32 CleanPoint;
                    hrA = pIMFSample->GetUINT32(MFSampleExtension_CleanPoint, &CleanPoint);  // is this a key frame?
                    UINT32 Discontinuity;
                    hrB = pIMFSample->GetUINT32(MFSampleExtension_Discontinuity, &Discontinuity);  // does it carry a discontinuity flag?
                    LONGLONG star, dur;
                    hr = pIMFSample->GetSampleTime(&star);
                    pCFilter->CUR = star;  // current audio position
                    hr = pIMFSample->GetSampleDuration(&dur);
                    DWORD Count;
                    hr = pIMFSample->GetBufferCount(&Count);
                    IMFMediaBuffer* pBuffer = NULL;
                    if (Count == 1)
                        hr = pIMFSample->GetBufferByIndex(0, &pBuffer);
                    else
                        hr = pIMFSample->ConvertToContiguousBuffer(&pBuffer);  // merge a multi-buffer sample into a single buffer
                    BYTE* pData = NULL;
                    hr = pBuffer->Lock(&pData, NULL, NULL);
                    DWORD len;
                    hr = pIMFSample->GetTotalLength(&len);
                    IMediaSample* pOutSample = NULL;
                    hr = pCFilter->pCPin2->GetDeliveryBuffer(&pOutSample, NULL, NULL, 0);  // get an empty output pin sample
                    if (hr == S_OK)
                    {
                        BYTE* pOutBuffer = NULL;
                        hr = pOutSample->GetPointer(&pOutBuffer);  // get the output sample's buffer pointer
                        if (pOutBuffer && pData && len <= 1000000)
                            CopyMemory(pOutBuffer, pData, len);
                        LONGLONG STAR = star - pCFilter->mStart, END = STAR + dur;
                        hr = pOutSample->SetTime(&STAR, &END);      // set the output sample time stamps
                        hr = pOutSample->SetActualDataLength(len);  // set the output sample's valid data length
                        if (hrA == S_OK)
                        {
                            if (CleanPoint)  // key frame
                                hr = pOutSample->SetSyncPoint(TRUE);  // set the sync point flag
                            else
                                hr = pOutSample->SetSyncPoint(FALSE);
                        }
                        if (hrB == S_OK)
                        {
                            if (Discontinuity)  // discontinuity
                            {
                                hr = pOutSample->SetDiscontinuity(TRUE);  // set the discontinuity flag
                            }
                            else
                                hr = pOutSample->SetDiscontinuity(FALSE);
                        }
                        hr = pCFilter->pCPin2->Deliver(pOutSample);  // deliver the sample downstream
                        pOutSample->Release();  // release the output pin sample
                    }
                    pBuffer->Unlock();
                    SafeRelease(&pBuffer);
                    SafeRelease(&pIMFSample);
                    hr = pCFilter->pAudioStream->RequestSample(NULL);  // request the next sample
                }
                break;
            case MEEndOfStream:  // end of stream
                hr = pCFilter->pCPin2->DeliverEndOfStream();
                break;
            case MEStreamStopped:  // the stream has stopped
                PropVariantClear(&var);
                SafeRelease(&pStreamEvent);
                return 1;  // terminate the audio worker thread
            }
            PropVariantClear(&var);
        }
        SafeRelease(&pStreamEvent);
    }
    else
    {
        return 0;  // GetEvent failed (for example, the source was shut down); exit the worker thread
    }
    goto Agan;
}

CPin1.cpp

#include "DLL.h"CPin1::CPin1(CFilter *pFilter, HRESULT *phr, LPCWSTR pPinName) : CBaseOutputPin(NAME("Video"), pFilter, pFilter, phr, pPinName)
{pCFilter = pFilter;
}CPin1::~CPin1()
{}STDMETHODIMP CPin1::NonDelegatingQueryInterface(REFIID riid, void **ppv)
{if (riid == IID_IQualityControl){return CBasePin::NonDelegatingQueryInterface(riid, ppv);}elsereturn CBasePin::NonDelegatingQueryInterface(riid, ppv);
}HRESULT CPin1::CheckMediaType(const CMediaType *pmt)
{AM_MEDIA_TYPE* pMt = NULL;pCFilter->pVideoType->GetRepresentation(AM_MEDIA_TYPE_REPRESENTATION, (void**)&pMt);//将IMFMediaType表示的媒体类型,转换为AM_MEDIA_TYPE结构形式CMediaType MT(*pMt);if (pmt->MatchesPartial(&MT)){pCFilter->pVideoType->FreeRepresentation(AM_MEDIA_TYPE_REPRESENTATION, pMt);//释放GetRepresentation分配的内存return S_OK;}pCFilter->pVideoType->FreeRepresentation(AM_MEDIA_TYPE_REPRESENTATION, pMt);//释放GetRepresentation分配的内存return S_FALSE;
}HRESULT CPin1::GetMediaType(int iPosition, CMediaType *pmt)
{if (pCFilter->m_pFileName == NULL)return S_FALSE;if (iPosition == 0){AM_MEDIA_TYPE* pMt = NULL;pCFilter->pVideoType->GetRepresentation(AM_MEDIA_TYPE_REPRESENTATION, (void**)&pMt);//将IMFMediaType表示的媒体类型,转换为AM_MEDIA_TYPE结构形式pmt->Set(*pMt);pCFilter->pVideoType->FreeRepresentation(AM_MEDIA_TYPE_REPRESENTATION, pMt);//释放GetRepresentation分配的内存HasSet = TRUE;return S_OK;}return S_FALSE;
}HRESULT CPin1::SetMediaType(const CMediaType *pmt)
{if (HasSet == FALSE)//如果GetMediaType函数没有调用{GetMediaType(0, &m_mt);//设置引脚媒体类型return S_OK;}return CBasePin::SetMediaType(pmt);
}HRESULT CPin1::BreakConnect()
{HasSet = FALSE;return CBasePin::BreakConnect();
}HRESULT CPin1::DecideBufferSize(IMemAllocator *pMemAllocator, ALLOCATOR_PROPERTIES * ppropInputRequest)//确定输出引脚样本缓冲区大小
{HRESULT hr = S_OK;ppropInputRequest->cBuffers = 1;//1个缓冲区ppropInputRequest->cbBuffer = 10000000;//缓冲区的大小10MALLOCATOR_PROPERTIES Actual;hr = pMemAllocator->SetProperties(ppropInputRequest, &Actual);if (FAILED(hr))return hr;if (Actual.cbBuffer < ppropInputRequest->cbBuffer)// 这个分配器是否不合适{return E_FAIL;}ASSERT(Actual.cBuffers == 1);// 确保我们只有 1 个缓冲区return S_OK;
}

CPin2.cpp

#include "DLL.h"CPin2::CPin2(CFilter *pFilter, HRESULT *phr, LPCWSTR pPinName) : CBaseOutputPin(NAME("Audio"), pFilter, pFilter, phr, pPinName)
{pCFilter = pFilter;
}CPin2::~CPin2()
{}STDMETHODIMP CPin2::NonDelegatingQueryInterface(REFIID riid, void **ppv)
{if (riid == IID_IMediaSeeking){return GetInterface(static_cast<IMediaSeeking*>(this), ppv);}elsereturn CBaseOutputPin::NonDelegatingQueryInterface(riid, ppv);
}HRESULT CPin2::CheckMediaType(const CMediaType *pmt)
{AM_MEDIA_TYPE* pMt = NULL;pCFilter->pAudioType->GetRepresentation(AM_MEDIA_TYPE_REPRESENTATION, (void**)&pMt);//将IMFMediaType表示的媒体类型,转换为AM_MEDIA_TYPE结构形式CMediaType MT(*pMt);if (pmt->MatchesPartial(&MT)){pCFilter->pAudioType->FreeRepresentation(AM_MEDIA_TYPE_REPRESENTATION, pMt);//释放GetRepresentation分配的内存return S_OK;}pCFilter->pAudioType->FreeRepresentation(AM_MEDIA_TYPE_REPRESENTATION, pMt);//释放GetRepresentation分配的内存return S_FALSE;
}HRESULT CPin2::GetMediaType(int iPosition, CMediaType *pmt)
{if (pCFilter->m_pFileName == NULL)return S_FALSE;if (iPosition == 0){AM_MEDIA_TYPE* pMt = NULL;pCFilter->pAudioType->GetRepresentation(AM_MEDIA_TYPE_REPRESENTATION, (void**)&pMt);//将IMFMediaType表示的媒体类型,转换为AM_MEDIA_TYPE结构形式pmt->Set(*pMt);pCFilter->pAudioType->FreeRepresentation(AM_MEDIA_TYPE_REPRESENTATION, pMt);//释放GetRepresentation分配的内存HasSet = TRUE;return S_OK;}return S_FALSE;
}HRESULT CPin2::SetMediaType(const CMediaType *pmt)
{if (HasSet == FALSE)//如果GetMediaType函数没有调用{GetMediaType(0, &m_mt);//设置引脚媒体类型return S_OK;}return CBasePin::SetMediaType(pmt);
}HRESULT CPin2::BreakConnect()
{HasSet = FALSE;return CBasePin::BreakConnect();
}HRESULT CPin2::DecideBufferSize(IMemAllocator *pMemAllocator, ALLOCATOR_PROPERTIES * ppropInputRequest)//确定输出引脚样本缓冲区大小
{HRESULT hr = S_OK;ppropInputRequest->cBuffers = 1;//1个缓冲区ppropInputRequest->cbBuffer = 1000000;//缓冲区的大小1MALLOCATOR_PROPERTIES Actual;hr = pMemAllocator->SetProperties(ppropInputRequest, &Actual);if (FAILED(hr))return hr;if (Actual.cbBuffer < ppropInputRequest->cbBuffer)// 这个分配器是否不合适{return E_FAIL;}ASSERT(Actual.cBuffers == 1);// 确保我们只有 1 个缓冲区return S_OK;
}

HRESULT STDMETHODCALLTYPE CPin2::CheckCapabilities(DWORD *pCapabilities)  // check whether the stream has the specified seeking capabilities
{
    CheckPointer(pCapabilities, E_POINTER);
    return (~m_dwSeekingCaps & *pCapabilities) ? S_FALSE : S_OK;
}

HRESULT STDMETHODCALLTYPE CPin2::ConvertTimeFormat(LONGLONG *pTarget, const GUID *pTargetFormat, LONGLONG Source, const GUID *pSourceFormat)  // convert from one time format to another
{
    CheckPointer(pTarget, E_POINTER);
    if (pTargetFormat == 0 || *pTargetFormat == TIME_FORMAT_MEDIA_TIME)
    {
        if (pSourceFormat == 0 || *pSourceFormat == TIME_FORMAT_MEDIA_TIME)
        {
            *pTarget = Source;
            return S_OK;
        }
    }
    return E_INVALIDARG;
}

HRESULT STDMETHODCALLTYPE CPin2::GetAvailable(LONGLONG *pEarliest, LONGLONG *pLatest)  // get the range of times within which seeking is valid
{
    if (pEarliest)
    {
        *pEarliest = 0;
    }
    if (pLatest)
    {
        CAutoLock lock(m_pLock);
        *pLatest = pCFilter->DUR;
    }
    return S_OK;
}

HRESULT STDMETHODCALLTYPE CPin2::GetCapabilities(DWORD *pCapabilities)  // retrieve all of the stream's seeking capabilities
{
    CheckPointer(pCapabilities, E_POINTER);
    *pCapabilities = m_dwSeekingCaps;
    return S_OK;
}

HRESULT STDMETHODCALLTYPE CPin2::GetCurrentPosition(LONGLONG *pCurrent)  // get the current position, relative to the total duration of the stream
{
    *pCurrent = pCFilter->CUR;
    return S_OK;
}

HRESULT STDMETHODCALLTYPE CPin2::GetDuration(LONGLONG *pDuration)  // get the duration of the stream
{
    CheckPointer(pDuration, E_POINTER);
    CAutoLock lock(m_pLock);
    *pDuration = pCFilter->DUR;
    return S_OK;
}

HRESULT STDMETHODCALLTYPE CPin2::GetPositions(LONGLONG *pCurrent, LONGLONG *pStop)  // get the current and stop positions, relative to the total duration
{
    CheckPointer(pCurrent, E_POINTER);
    CheckPointer(pStop, E_POINTER);
    *pCurrent = pCFilter->CUR;
    *pStop = pCFilter->DUR;
    return S_OK;
}

HRESULT STDMETHODCALLTYPE CPin2::GetPreroll(LONGLONG *pllPreroll)  // get the amount of data queued before the start position
{
    CheckPointer(pllPreroll, E_POINTER);
    *pllPreroll = 0;
    return S_OK;
}

HRESULT STDMETHODCALLTYPE CPin2::GetRate(double *pdRate)  // get the playback rate
{
    CheckPointer(pdRate, E_POINTER);
    CAutoLock lock(m_pLock);
    *pdRate = 1.0;
    return S_OK;
}

HRESULT STDMETHODCALLTYPE CPin2::GetStopPosition(LONGLONG *pStop)  // get the stop time, relative to the duration of the stream
{
    CheckPointer(pStop, E_POINTER);
    CAutoLock lock(m_pLock);
    *pStop = pCFilter->DUR;
    return S_OK;
}

HRESULT STDMETHODCALLTYPE CPin2::GetTimeFormat(GUID *pFormat)  // get the time format currently used for seek operations
{
    CheckPointer(pFormat, E_POINTER);
    *pFormat = TIME_FORMAT_MEDIA_TIME;
    return S_OK;
}

HRESULT STDMETHODCALLTYPE CPin2::IsFormatSupported(const GUID *pFormat)  // determine whether a given time format is supported for seeking
{
    CheckPointer(pFormat, E_POINTER);
    return *pFormat == TIME_FORMAT_MEDIA_TIME ? S_OK : S_FALSE;
}

HRESULT STDMETHODCALLTYPE CPin2::IsUsingTimeFormat(const GUID *pFormat)  // determine whether seeking is currently using the given time format
{
    CheckPointer(pFormat, E_POINTER);
    return *pFormat == TIME_FORMAT_MEDIA_TIME ? S_OK : S_FALSE;
}

HRESULT STDMETHODCALLTYPE CPin2::QueryPreferredFormat(GUID *pFormat)  // get the preferred time format for seeking
{
    CheckPointer(pFormat, E_POINTER);
    *pFormat = TIME_FORMAT_MEDIA_TIME;
    return S_OK;
}

HRESULT STDMETHODCALLTYPE CPin2::SetPositions(LONGLONG *pCurrent, DWORD dwCurrentFlags, LONGLONG *pStop, DWORD dwStopFlags)  // set the current and stop positions
{
    CheckPointer(pCurrent, E_POINTER);
    DWORD dwCurrentPos = dwCurrentFlags & AM_SEEKING_PositioningBitsMask;
    if (dwCurrentPos == AM_SEEKING_AbsolutePositioning && *pCurrent >= 0 && *pCurrent <= pCFilter->DUR)
    {
        HRESULT hr;
        FILTER_STATE fs;
        hr = pCFilter->GetState(0, &fs);
        if (fs != State_Stopped && *pCurrent && *pCurrent < pCFilter->DUR - 10000000)
        {
            hr = pCFilter->pIMFMediaSource->Pause();  // pause the media source
            hr = pCFilter->pCPin1->DeliverBeginFlush();
            hr = pCFilter->pCPin2->DeliverBeginFlush();
            Sleep(200);  // give the downstream decoders time to discard their queued samples;
                         // if stale samples remain, the renderer waits for their time stamps and playback stalls here
            hr = pCFilter->pCPin1->DeliverEndFlush();
            hr = pCFilter->pCPin2->DeliverEndFlush();
            hr = pCFilter->pCPin1->DeliverNewSegment(0, pCFilter->DUR - pCFilter->mStart, 1.0);
            hr = pCFilter->pCPin2->DeliverNewSegment(0, pCFilter->DUR - pCFilter->mStart, 1.0);
            pCFilter->mStart = *pCurrent;
            PROPVARIANT var;
            PropVariantInit(&var);
            var.vt = VT_I8;
            var.hVal.QuadPart = pCFilter->mStart;
            hr = pCFilter->pIMFMediaSource->Start(pCFilter->pISourceD, NULL, &var);  // restart the media source at the new position
            PropVariantClear(&var);
            return S_OK;
        }
    }
    return E_INVALIDARG;
}

HRESULT STDMETHODCALLTYPE CPin2::SetRate(double dRate)  // set the playback rate
{
    if (dRate == 1.0)
        return S_OK;
    else
        return S_FALSE;
}

HRESULT STDMETHODCALLTYPE CPin2::SetTimeFormat(const GUID *pFormat)  // set the time format for subsequent seek operations
{
    CheckPointer(pFormat, E_POINTER);
    return *pFormat == TIME_FORMAT_MEDIA_TIME ? S_OK : E_INVALIDARG;
}
