A High-Quality Dataset for Camouflaged Motion Object Detection in the Wild
CAMotion is a large-scale benchmark for studying camouflaged moving objects in realistic videos. It is designed to support robust evaluation, reproducible comparison, and future research on motion-aware camouflage understanding in unconstrained real-world scenes.
Introduction
A benchmark website should answer three questions clearly: what the dataset is, why it matters, and how others can use it.
CAMotion provides curated in-the-wild videos containing camouflaged moving objects with dense annotations. It focuses on realistic situations such as low contrast, background blending, clutter, occlusion, motion ambiguity, and changing illumination.
Given a video, the goal is to identify and segment the camouflaged moving object(s) frame by frame. The benchmark can support both segmentation-style pipelines and motion-guided video understanding methods.
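The frame-by-frame objective above can be sketched as a simple mask-overlap score. The mask layout (nested 0/1 lists) and the use of per-frame IoU averaged over a video are illustrative assumptions, not the official CAMotion protocol:

```python
# Sketch of per-frame scoring: compare a predicted binary mask against the
# ground-truth mask for each frame, then average over the video.
# Masks are assumed to be H x W nested lists of 0/1 values.

def frame_iou(pred, gt):
    """Intersection-over-union (Jaccard) between two equal-shape binary masks."""
    inter = sum(p & g for pr, gr in zip(pred, gt) for p, g in zip(pr, gr))
    union = sum(p | g for pr, gr in zip(pred, gt) for p, g in zip(pr, gr))
    return inter / union if union else 1.0  # empty-vs-empty counts as perfect

def video_score(pred_masks, gt_masks):
    """Mean IoU over all frames of one video."""
    scores = [frame_iou(p, g) for p, g in zip(pred_masks, gt_masks)]
    return sum(scores) / len(scores)
```

A real evaluation would add boundary-accuracy terms and vectorized array operations; this sketch only fixes the shape of the task.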
Existing video benchmarks often emphasize salient or clearly visible objects, while camouflaged motion remains much less explored. CAMotion is created to encourage algorithms that can reason about subtle appearance cues, motion consistency, and difficult foreground-background ambiguity.
Key Features
This section is inspired by benchmark homepages, such as LaSOT, that quickly summarize the dataset's value proposition.
Natural scenes with complex backgrounds, realistic motion, and challenging camouflage patterns.
Frame-level masks for reliable training, fair comparison, and detailed error analysis.
Standardized train / val / test splits with clear metrics and reproducible evaluation.
Download links, codebase, evaluation scripts, and citation info in one clean entry point.
Dataset Scale
Put the most convincing numbers here. This section should let visitors grasp the dataset's scale within five seconds.
You can later replace this block with a pie chart / bar chart image.
Protocol & Results
Strong dataset websites separate metrics, baselines, and evaluation entry points clearly. GOT-10k and MOSE both emphasize benchmark protocol and evaluation access.
Define the official setting here, such as whether methods may use optical flow, pretraining, or external data.
Add the evaluation server or submission instructions if you plan to keep the test labels private.
We recommend placing the 4–8 most important methods here; additional methods can link out to GitHub or the evaluation page.
Visual Examples
One strength of the MOSE page is that it visualizes the types of complex scenes, which adds real persuasive power.
Low Contrast: Foreground and background have highly similar appearance.
Occlusion: Target disappears partially behind cluttered structures.
Dynamic Background: Background movement and camera motion interfere with perception.
Small Objects: Tiny camouflaged targets with limited discriminative cues.
A single overall teaser video works best. You can also add 2–4 short challenge clips.
Access
Download area should be simple and explicit: data, annotations, metadata, code.
Main visual data package.
Pixel-level annotations and target masks.
Official train/val/test split files and metadata.
Training, inference, and evaluation scripts.
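Once the data and annotation packages are downloaded, frames and masks need to be matched per video. The directory layout below (DAVIS-style `JPEGImages`/`Annotations` folders, frames and masks sharing filename stems) is an assumption for illustration; adjust it to the released package:

```python
# Hypothetical layout (an assumption, not the confirmed CAMotion structure):
#   <root>/JPEGImages/<video>/<frame>.jpg
#   <root>/Annotations/<video>/<frame>.png
from pathlib import Path

def pair_frames(root, video):
    """Yield (frame_path, mask_path) pairs for one video, matched by filename stem."""
    frames = sorted((Path(root) / "JPEGImages" / video).glob("*.jpg"))
    masks = {p.stem: p for p in (Path(root) / "Annotations" / video).glob("*.png")}
    for f in frames:
        if f.stem in masks:  # skip frames without a released mask
            yield f, masks[f.stem]
```

Matching by stem rather than by index keeps the loader robust to sparsely annotated videos.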
Support
Replace this answer with your official evaluation protocol and submission instructions.
State whether external pretraining or extra annotations are allowed for fair comparison.
Add a contact email or GitHub issue page here.
Reference
Please cite CAMotion if you find it useful in your research:
@article{camotion,
  title   = {CAMotion: A High-Quality Dataset for Camouflaged Motion Object Detection in the Wild},
  author  = {Siyuan Yao and Hao Sun and Hai Long and Ruiqi Yu and Jiehong Li and Xiwei Jiang and Yanzhao Su and Wenqi Ren and Xiaochun Cao},
  journal = {Under review},
  year    = {2026}
}