How do the verification tools work?
There are two verification tools:
- The Video Inspector helps you find out whether a video has been used to make a fake or misleading claim
- The Image Inspector helps you check if an image is fake
With both tools, you can either upload a file from your computer or paste a media URL. The tool will then analyse it and show the results in an understandable format.
How can you get the links of videos and images?
To analyse an image, you’ll need its direct URL (this is the web link that leads straight to the image itself). If the image is part of a social media post – like on Facebook – you can usually get this by right-clicking on the image and selecting ‘Copy Image URL’. Please note that this only works on desktop computers.
To analyse a video, you can either paste a link (URL) or upload a video file from your computer. Uploaded videos must be no larger than 2GB and no longer than 30 minutes. The tool supports links from YouTube, Facebook, Twitter, Instagram, Vimeo, Dailymotion, LiveLeak, and Dropbox – though not all videos from these platforms can be accessed due to platform or user-specific restrictions. Also, the link should point to a single video (not a playlist). Supported video formats include mp4, webm, avi, mov, wmv, ogv, mpg, flv, and mkv.
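For readers who want to pre-check a file before uploading, the limits above (2GB size cap and the supported formats) can be verified locally. The sketch below is a hypothetical illustration, not part of the actual Video Inspector; note that the 30-minute duration limit cannot be checked this way without decoding the video.

```python
import os

# Constraints taken from the FAQ; the checker itself is a hypothetical
# illustration of a pre-upload sanity check.
MAX_SIZE_BYTES = 2 * 1024**3  # 2GB upload limit
SUPPORTED_EXTENSIONS = {"mp4", "webm", "avi", "mov", "wmv",
                        "ogv", "mpg", "flv", "mkv"}

def check_upload(filename: str, size_bytes: int) -> list[str]:
    """Return a list of problems; an empty list means the file looks acceptable."""
    problems = []
    ext = os.path.splitext(filename)[1].lstrip(".").lower()
    if ext not in SUPPORTED_EXTENSIONS:
        problems.append(f"unsupported format: .{ext or '?'}")
    if size_bytes > MAX_SIZE_BYTES:
        problems.append("file larger than 2GB")
    return problems
```

Running such a check before uploading saves you a failed submission for files that the tool would reject anyway.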
What do the results of the tools mean?
The Image Inspector evaluates whether an image is synthetic (i.e. fake) or real. It reports its findings on a four-level evidence scale: very strong, strong, moderate, or weak evidence that the image is synthetic. For more details about how to interpret the results, please refer to [link].
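Tools like this typically translate an internal model score into such evidence labels. The sketch below shows the general idea with made-up thresholds; the Image Inspector's actual cut-offs are not published in this FAQ.

```python
# Hypothetical thresholds for illustration only -- the real Image
# Inspector's internal cut-offs are not documented here.
def evidence_level(score: float) -> str:
    """Map a model's synthetic-image score (0.0-1.0) to an evidence label."""
    if score >= 0.95:
        return "very strong evidence"
    if score >= 0.85:
        return "strong evidence"
    if score >= 0.70:
        return "moderate evidence"
    return "weak evidence"
```

The point of a banded scale like this is that a raw score such as 0.93 is hard for a non-expert to act on, while "strong evidence" conveys both the direction and the remaining uncertainty.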
The Video Inspector helps you find out if a video has been used before by identifying websites that contain similar visual scenes. This can show where parts of the video may have already appeared online. It also finds near-duplicate videos – like reuploads, shortened versions, or videos with very similar visuals – which can help track how the content has been reused or highlight potential copyright concerns. In addition, the tool pulls out keyframes (important still images from the video) that can be used for further investigation. For example, you can do a reverse image search on these frames using tools like Google Lens to find more information about where they may have appeared online.
Can the results always be trusted?
The model has been thoroughly tested and provides a fairly reliable indication. However, it is still a computer model, so its results should always be interpreted with caution. When in doubt, it’s best to verify using multiple sources or expert judgment.
What does it mean if I get an inconclusive result?
An inconclusive result means that the tool could not determine with certainty whether the content is real or manipulated. This can happen when the available data is insufficient or unclear. It’s a signal that further investigation or additional sources might be needed to make a final judgment.
Is my search saved when I use the tools?
The search data is stored anonymously for statistical purposes. We keep the date of the request, the URL of the media, and – in the case of images – the media file itself. Each time a piece of media is submitted, the system creates a unique digital fingerprint of it, called a ‘hash’. This fingerprint is a short code that represents the content without revealing any personal information. It helps us recognise if the same media has been analysed before, so we can quickly return the existing results instead of repeating the analysis.
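The fingerprint-and-reuse mechanism described above can be sketched in a few lines. This is an illustrative example only: it uses SHA-256, whereas the FAQ does not specify which hash function the tools actually use, and the analysis step is a placeholder.

```python
import hashlib

def media_fingerprint(data: bytes) -> str:
    """Compute a content-based fingerprint of a media file.

    SHA-256 is used here for illustration; the FAQ does not say which
    hash the tools actually compute. The digest is derived only from
    the file's bytes, so it reveals no personal information.
    """
    return hashlib.sha256(data).hexdigest()

# A tiny cache keyed by fingerprint: identical media returns the stored result.
results_cache: dict[str, str] = {}

def analyse(data: bytes) -> str:
    fp = media_fingerprint(data)
    if fp in results_cache:
        return results_cache[fp]   # same media seen before: reuse the result
    result = "analysis result"     # placeholder for the real analysis
    results_cache[fp] = result
    return result
```

Because the same bytes always produce the same fingerprint, resubmitting a previously analysed file hits the cache and returns the existing result instead of repeating the analysis.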
Are the tools free to use?
All tools, training materials, and research developed through the VISAVIS Project will be freely available to the public. Our goal is to make digital verification accessible to everyone, ensuring that individuals, educators, and organisations have the resources they need to navigate online content critically and responsibly.
The verification tools will be open access and designed for ease of use, allowing users to check images and videos for manipulation, AI generation, or out-of-context usage. Technical documentation and user guides will be provided to support different levels of expertise. Additionally, research reports and analyses will be published under open licenses, ensuring that findings can be freely used and built upon by researchers, media professionals, and fact-checking organisations.
Do I need technical knowledge to use the tools?
No, you don’t need any technical knowledge to use the VISAVIS tools. They are designed to be simple and accessible for everyone. With just a few clicks, you can check visual content and find out whether something is real or manipulated.
How is the project funded?
The VISAVIS Project is a non-profit initiative funded by the European Media and Information Fund (EMIF), which supports initiatives that combat disinformation and promote media literacy in Europe. EMIF is a multi-donor fund established by the European University Institute and the Calouste Gulbenkian Foundation, with contributions from donors such as Google, to provide grants for fact-checking, research, and media literacy projects. The fund is managed by the Calouste Gulbenkian Foundation in partnership with the European University Institute, ensuring transparency and independence in its grant-making process.
Can I contact someone at VISAVIS?
If you have any questions or would like more information about the VISAVIS Project, you can reach out to us via email at visavis@iti.gr or use our contact form. We’ll make sure to get back to you as soon as we can. We’re happy to hear from individuals, educators, researchers, and organisations interested in media literacy and digital verification.