Person re-identification (pedestrian ReID)
2024-03-08 20:34:57 418KB artificial intelligence
1
Pretrained model
2022-09-20 12:05:25 302.89MB REID
1
The SYSU-MM01 dataset is an RGB-Infrared (IR) multi-modality pedestrian dataset for cross-modality person re-identification. The Intelligence Science & System Lab (iSEE) of Sun Yat-sen University provides the SYSU-MM01 dataset free of charge to researchers. Every pedestrian captured in the dataset has signed a privacy license allowing the images to be used for scientific research and shown in research papers. SYSU-MM01 is a popular cross-modality ReID dataset containing 491 pedestrians captured by 4 visible-light cameras and 2 infrared cameras. The training set contains 19,659 visible images of 395 identities and 12…
2022-07-11 14:15:19 213.49MB ReID
1
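The cross-modality evaluation protocol this dataset is built for (infrared queries matched against a visible-light gallery) can be sketched as follows. This is a toy illustration, not the official evaluation code; the record layout is assumed, and the camera-to-modality mapping used here is the commonly cited one (cameras 3 and 6 are infrared) and should be checked against the dataset documentation.

```python
# Hypothetical per-image metadata: (person_id, camera_id) pairs.
# In SYSU-MM01, cameras 3 and 6 are commonly described as infrared
# and the rest as visible-light; treat this mapping as an assumption.
IR_CAMS = {3, 6}

records = [
    (1, 1), (1, 3), (2, 2), (2, 6), (3, 4), (3, 3),
]

# Cross-modality evaluation: infrared images form the query set,
# visible-light images form the gallery.
query = [(pid, cam) for pid, cam in records if cam in IR_CAMS]
gallery = [(pid, cam) for pid, cam in records if cam not in IR_CAMS]

print(len(query), len(gallery))  # 3 3
```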
fast-reid-master-20200831.zip is https://github.com/JDAI-CV/fast-reid, the version downloaded on 2020-08-31. The project was under rapid iterative development at the time and changed considerably between versions, so this snapshot is archived here; please contact me for removal if it infringes.
2022-07-06 15:07:16 387KB reid
1
Follow-up processing for the object-detection stage: target handling, including feature processing across different targets and feature matching. Applicable to deduplication downstream of person/vehicle/non-motor-vehicle detection models, as well as to the trajectory-tracking part of applications.
2022-06-20 16:05:42 40.94MB person/vehicle/non-motor deep learning ReID model tracking
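The feature-matching step described in this entry (associating detections of the same target across frames or cameras) is typically done with cosine similarity between ReID feature vectors. The sketch below is a minimal illustration with toy vectors and an assumed threshold; production trackers add mutual-nearest-neighbor checks, motion gating, and so on.

```python
import numpy as np

def cosine_match(feats_a, feats_b, thresh=0.8):
    """Match ReID feature vectors between two sets of detections by
    cosine similarity; pairs above `thresh` are treated as the same
    target. Minimal sketch: greedy argmax, no mutual-nearest check."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    sim = a @ b.T  # pairwise cosine similarities
    return [(i, int(sim[i].argmax())) for i in range(len(a))
            if sim[i].max() >= thresh]

feats_a = np.array([[1.0, 0.0], [0.0, 1.0]])
feats_b = np.array([[0.9, 0.1], [0.1, 0.9]])
print(cosine_match(feats_a, feats_b))  # [(0, 0), (1, 1)]
```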
AI科技大本营 open course: "Cross-Camera Tracking (ReID) Technology Sharing", 52 pages, .pptx
2022-05-31 09:11:47 3.88MB artificial intelligence technology documentation
AICity-reID 2020 (Track 2)
In this repository, we include our 1st-place submission (the Baidu submission) to the 2020 re-id track. We fused models trained on PaddlePaddle and on PyTorch; to document them, the two training parts are provided separately, each including its training code.
Performance (AICITY2020 Challenge Track 2 leaderboard):
Team Name          mAP
Baidu-UTS (Ours)   84.1%
瑞亚爱              78.1%
DMT                73.1%
Extracted features, camera predictions and orientation predictions: the features have been updated; you can download them from … or …
├── final_features/
│   ├── features/ /* extracted pytorch feature
│   ├── pkl_feas/ /* extracted paddle feat
2022-05-17 00:01:04 8.91MB pytorch vehicle paddlepaddle vehicle-reid
1
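The entry above mentions fusing models trained in two frameworks. A common way to ensemble ReID features from two backbones is to L2-normalize each feature set and concatenate them; the sketch below illustrates that trick with random toy features. The function name and dimensions are assumptions for illustration, and the repository's exact fusion recipe may differ.

```python
import numpy as np

def fuse(feat_pt, feat_pd):
    """Fuse features from two backbones (e.g., one trained in PyTorch,
    one in PaddlePaddle) by L2-normalizing each set row-wise and
    concatenating. Normalizing first keeps either backbone from
    dominating the distance computation."""
    f1 = feat_pt / np.linalg.norm(feat_pt, axis=1, keepdims=True)
    f2 = feat_pd / np.linalg.norm(feat_pd, axis=1, keepdims=True)
    return np.concatenate([f1, f2], axis=1)

# Toy example: 4 images, 256-dim and 512-dim features respectively.
fused = fuse(np.random.randn(4, 256), np.random.randn(4, 512))
print(fused.shape)  # (4, 768)
```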
Human parsing has been extensively studied recently (Yamaguchi et al. 2012; Xia et al. 2017) due to its wide applications in many important scenarios. Mainstream fashion parsing models (i.e., parsers) focus on parsing high-resolution and clean images. However, directly applying parsers trained on benchmarks of high-quality samples to a particular application scenario in the wild, e.g., a canteen, airport or workplace, often gives unsatisfactory performance due to domain shift. In this paper, we explore a new and challenging cross-domain human parsing problem: taking a benchmark dataset with extensive pixel-wise labeling as the source domain, how can we obtain a satisfactory parser on a new target domain without requiring any additional manual labeling? To this end, we propose a novel and efficient cross-domain human parsing model to bridge the cross-domain differences in terms of visual appearance and environment conditions and fully exploit commonalities across domains. Our proposed model explicitly learns a feature compensation network, which is specialized for mitigating the cross-domain differences. A discriminative feature adversarial network is introduced to supervise the feature compensation and effectively reduce the discrepancy between the feature distributions of the two domains. In addition, our proposed model introduces a structured label adversarial network to guide the parsing results of the target domain to follow the high-order relationships of the structured labels shared across domains. The proposed framework is end-to-end trainable, practical and scalable in real applications. Extensive experiments are conducted in which the LIP dataset serves as the source domain and 4 different datasets without any annotations, including surveillance videos, movies and runway shows, are evaluated as target domains. The results consistently confirm the data efficiency and performance advantages of the proposed method for the challenging cross-domain human parsing problem.
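The domain-adversarial supervision described in this abstract can be illustrated with a toy loss computation: a discriminator scores features and is trained to label source features 1 and (compensated) target features 0, while the feature-compensation network is trained to maximize this loss, pulling the two feature distributions together. This is a simplified sketch of the general idea (sigmoid discriminator with binary cross-entropy), not the paper's exact formulation.

```python
import numpy as np

def domain_adversarial_loss(src_scores, tgt_scores):
    """Binary cross-entropy the domain discriminator minimizes:
    source features are labeled 1, target features 0. The feature
    compensation network is updated to *maximize* this quantity,
    reducing the discrepancy between the two feature distributions."""
    eps = 1e-7
    p_src = 1.0 / (1.0 + np.exp(-src_scores))  # sigmoid
    p_tgt = 1.0 / (1.0 + np.exp(-tgt_scores))
    return -(np.log(p_src + eps).mean()
             + np.log(1.0 - p_tgt + eps).mean())

# A confident discriminator (source scored high, target low)
# incurs a near-zero loss; confused scores incur a large loss.
print(domain_adversarial_loss(np.array([10.0]), np.array([-10.0])))
```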
Abstract—This paper presents a robust Joint Discriminative appearance model based Tracking method using online random forests and mid-level features (superpixels). To achieve superpixel-wise discriminative ability, we propose a joint appearance model that consists of two random forest based models, i.e., the Background-Target discriminative Model (BTM) and the Distractor-Target discriminative Model (DTM). More specifically, the BTM effectively learns discriminative information between the target object and the background. In contrast, the DTM is used to suppress distracting superpixels, which significantly improves the tracker's robustness and alleviates the drifting problem. A novel online random forest regression algorithm is proposed to build the two models. The BTM and DTM are linearly combined into a joint model to compute a confidence map. Tracking results are estimated using the confidence map, where the position and scale of the target are estimated sequentially. Furthermore, we design a model updating strategy to adapt to appearance changes over time by discarding degraded trees of the BTM and DTM and initializing new trees as replacements. We test the proposed tracking method on two large tracking benchmarks, the CVPR2013 tracking benchmark and the VOT2014 tracking challenge. Experimental results show that the tracker runs at real-time speed and achieves favorable tracking performance compared with state-of-the-art methods. The results also suggest that the DTM improves tracking performance significantly and plays an important role in robust tracking.
2022-03-26 14:11:37 26.39MB face recognition pedestrian ReID
1
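The linear combination of the BTM and DTM confidence maps described in the abstract above can be sketched as follows. The weighting `alpha` and map shapes are assumptions for illustration; the paper builds these maps from random forest outputs over superpixels, which is omitted here.

```python
import numpy as np

def joint_confidence(btm_map, dtm_map, alpha=0.5):
    """Linearly combine the Background-Target Model (BTM) and
    Distractor-Target Model (DTM) confidence maps into a joint map;
    the estimated target position is the joint map's argmax.
    `alpha` is an assumed weighting, not the paper's value."""
    joint = alpha * btm_map + (1.0 - alpha) * dtm_map
    pos = np.unravel_index(np.argmax(joint), joint.shape)
    return joint, pos

# Toy 5x5 maps that both peak at row 2, column 3.
btm = np.zeros((5, 5)); btm[2, 3] = 1.0
dtm = np.zeros((5, 5)); dtm[2, 3] = 0.8
_, pos = joint_confidence(btm, dtm)
print(pos)  # (2, 3)
```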
This part contains 19 cross-modality ReID papers and 1 face-recognition paper, together with reading notes: all the cross-modality ReID papers that could currently be found from 2017 to 2020, collected here for convenient use.
2022-03-20 14:38:20 53.15MB artificial intelligence cross-modality person re-identification
1
Cross-camera tracking (Person Re-Identification, or ReID) is currently a hot research direction in computer vision, mainly addressing the recognition and retrieval of pedestrians across cameras and across scenes. The technology recognizes pedestrians from cues such as clothing, body shape and hairstyle; combined with face recognition, it can serve many new application scenarios and raise the cognitive level of artificial intelligence to a new stage.
2021-12-23 08:04:09 10.19MB technology science artificial intelligence computer vision
1
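The retrieval task at the core of ReID, as described in the entry above, reduces to ranking a gallery by distance to a query feature. The sketch below uses toy vectors and plain Euclidean distance; in practice the features come from a trained CNN and re-ranking is often applied on top.

```python
import numpy as np

def rank_gallery(query_feat, gallery_feats):
    """Rank gallery images by Euclidean distance to the query
    feature -- the core retrieval step of cross-camera ReID.
    Returns gallery indices from best match to worst."""
    dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
    return np.argsort(dists)

# Toy 2-D features: gallery item 1 matches the query exactly.
gallery = np.array([[0.0, 1.0], [1.0, 0.0], [0.9, 0.1]])
order = rank_gallery(np.array([1.0, 0.0]), gallery)
print(order.tolist())  # [1, 2, 0]
```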