Check out the Russia-Ukraine Monitor Map
Learn more about the Centre for Information Resilience
When I really think about it, the position I am in now within the open source community isn’t down to specific skills or techniques, or even much to do with open source itself. Really, it boils down to four things:
Hard work and persistence. Open source investigative work involves heavy digging over long periods of time and often doing quite repetitive work. Amidst that you also have to keep up with new skills and sharpen your axe over and over again.
Obsessive and opportunistic. Many brilliant investigators I have known have been borderline obsessive about their work. There’s no glorifying my personal story with open source: while it could look successful on the surface, I have sacrificed social relationships and sleep, skipped seeing family, avoided holidays and worked weekends, all in a bid to work harder and do more.
Storytelling. Some of the investigations I’ve worked on or collaborated on in the past never resonated because of their complexity or the ‘assumed knowledge’ they required. Learning to tell stories and shape investigations and findings around ‘why is this important’ has helped me a lot.
Time management. Learning this has been the one strength I have in limiting burnout, prioritising tasks, and getting large volumes of work done.
Pick a very niche skill and master it. There are a lot of generalist investigators moving into this field at a rapid pace, many of whom have done familiarisation courses and training to learn OSINT and open source investigations, but to make a real stamp on this industry it’s helpful to choose a specialisation and master it. Watch courses and tutorials on YouTube, use Udemy, read textbooks from university courses, ask your friends in IT, GIS, intelligence, conflict studies and other fields what they learn, and use all of that to master your niche.
Don’t try to do everything. I often tell investigators in my teams that if they chase many chickens, they will catch none. But if they plan, assess, watch and chase one chicken, they will catch it. The focus should be not on doing everything, but on getting products (like reports, Twitter threads, presentations) completed.
Don’t follow concrete frameworks. One of the biggest flaws I have seen in very large organisations and departments in open source is a ritualised framework of approach to investigations whereby processes are followed to the letter. With the rapid advent of technology and the ever-changing tactics of threat actors, a framework developed by someone two years ago is not going to be successful in an investigation today. Of course, linear workflows and steps are great, but make sure you’re not applying an ‘if this, then that’ approach. For some of these frameworks, when a new brick wall is put in place, you need to be a bit creative to learn how to climb over it.
I wish I had learned time management earlier. I use programs such as Todoist to prioritise tasks, dedicate time and stop myself from spending unnecessary time on things such as crawling through a desert on Google Earth just to find one image that isn’t really important. I also use the Pomodoro technique a lot to ‘block’ periods of time, take breaks and reassess where I am at in my research or project.
There are a number of facets to the role of a director of investigations, and management is definitely one. Managing open source teams requires developing specific case knowledge, building a creative approach to investigations, and knowing how investigations can be visualised.
But more than that, it’s about getting products moved from initial research to publication and working with teams to mature those investigations into final products. This means a lot of ‘jumping’ into investigation teams when there are problems, and having the flexibility and experience to sort those problems and help teams find solutions.
Of course, while being in management, I am still an investigator at heart and specifically driven to create mapping products with data and see them used in a bigger setting – which for many of our projects involves informing media, supporting justice and accountability measures, sanctions and combatting hostile narratives.
So whether it’s a specific geolocation using Google Earth, Sentinel or commercial satellite platforms, or mapping data using MapHub, Gephi or QGIS to surface new findings and trends, I am in there at the coalface, chipping away at the data and translating it to give it new meaning.
One of my core roles is the management task of ‘putting out fires’: solving problems and challenges faced by teams and coming up with solutions in open source investigations. It’s also what I enjoy the most, as these pressure-driven environments often result in the most creative or inspiring approaches and are a breeding ground for new techniques and findings.
For Ukraine, our sources have varied. First and foremost is the footage filmed by people in Ukraine and neighbouring countries. We’re getting most of this footage from Telegram, Twitter and TikTok. Of course, much of our sourcing also comes from satellite imagery, so we have been heavily reliant upon good imagery from Planet, using both their daily coverage and high-resolution imagery to identify Russian firing positions and movements and to track Russia’s invasion of Ukraine, but also imagery from Sentinel Hub, Maxar and others. The geospatial data we’re using also comes from NASA FIRMS, population data and other sources to give us a full picture of what’s happening in Ukraine.
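To give a sense of how geospatial data like NASA FIRMS feeds into this kind of work, here is a minimal sketch of filtering active-fire detections down to an area of interest. The sample rows only mimic the shape of a FIRMS CSV export; the values and the bounding box are illustrative, not real detections or the team’s actual workflow.

```python
import csv
import io

# Illustrative sample in the shape of a FIRMS active-fire CSV export
# (coordinates and dates are made up for the example).
SAMPLE = """latitude,longitude,acq_date,acq_time,confidence
49.84,36.23,2022-03-01,1012,high
50.45,30.52,2022-03-01,1030,nominal
48.02,37.80,2022-03-02,0915,high
"""

def detections_in_bbox(csv_text, lat_min, lat_max, lon_min, lon_max):
    """Return the detection rows whose coordinates fall inside a bounding box."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [
        r for r in rows
        if lat_min <= float(r["latitude"]) <= lat_max
        and lon_min <= float(r["longitude"]) <= lon_max
    ]

# A rough (hypothetical) box around one city of interest
hits = detections_in_bbox(SAMPLE, 49.0, 50.5, 35.0, 38.0)
```

In practice the same filter would run over thousands of rows pulled from the FIRMS portal, with the surviving detections cross-checked against footage and satellite imagery from the same dates.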
The other aspect of our sourcing comes from niche points: for example, we’re tracking a lot of CCTV footage of Russian forces along roads and highways, and even where they are setting up their bases, as well as using intercepted signals. These specific sources are vital for attribution, showing which unit was where, and when.
For us, sources are visual. Visual evidence is what almost all of our work relies on. So in that respect, we test each source in multiple ways to put it under pressure and see if it withstands scrutiny: testing the location against satellite imagery, testing the time to see when it was taken, and scrutinising what is actually seen in the image or video, so that there are no assumptions, just facts.
What we’re doing through this process is creating a database of reliable and verified sources of evidence, so that further work can be done by threading those sources together in cases to further test their veracity and provenance.
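The verification process described above can be modelled as a simple record per source, where an item only counts as verified once every independent check has passed. The field names and checks here are my illustration of the idea, not the team’s actual database schema.

```python
from dataclasses import dataclass, field

@dataclass
class SourceRecord:
    """One piece of visual evidence and the verification checks run against it."""
    url: str
    # e.g. {"location": True, "time": True, "content": False}
    checks: dict = field(default_factory=dict)

    def is_verified(self) -> bool:
        # A source is only verified when location, time AND content
        # have each independently withstood scrutiny.
        required = ("location", "time", "content")
        return all(self.checks.get(c) is True for c in required)

clip = SourceRecord(
    url="https://example.com/clip1",
    checks={"location": True, "time": True, "content": True},
)
partial = SourceRecord(
    url="https://example.com/clip2",
    checks={"location": True},  # time and content not yet tested
)
```

Threading many such records together by location, time and actor is what turns individual verified clips into a case.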
One of the important aspects we also question is how information is being shared, and why it is being shared. In these geopolitical environments, and especially in conflicts, narratives are injected into the information space to distort truths and bend information, so we question every source to identify why something might be shared, and test that source for intention.
What we’ve seen among stakeholders in some areas that we’ve worked in is a misunderstanding of definitions of our workflows or products. For example, in some areas when we speak about verification of footage we’ve had stakeholders assume we’re contacting the original uploader of footage and asking them where they filmed it to verify the footage instead of geolocating the footage with satellite imagery. While that’s an understandable assumption – it is not what we do and not what we consider to be independent verification.
This same lack of shared definitions also appears around influence operations, where we have seen numerous points of misunderstanding about terms such as ‘bots’, ‘inauthentic behaviour’ and ‘disinformation’. I won’t go into those terms here, but when producing reports and investigations, even though it might feel repetitive, it is important to define them rather than assuming an audience has the same knowledge.
Don’t just learn how to do something; learn how not to do it wrong. With new tools and new data coming out at a rapid pace, it’s all well and good knowing how to use them or how to do cool tricks, but this can cause big mistakes if you read and interpret the data wrongly. So mentally ‘red teaming’ every new skill you pick up is quite important, given the volume of data we’re consuming as professionals in this space.
Never publicly guess. OSINT and open source techniques have become very popular very quickly, especially over the past few years. Many experienced practitioners in the field have some level of influence and a following in the industry, and can have an impact when they make a guess or an assumption about data. It’s good to remember that before publishing anything.
Take care of your mental health and get some daily movement.
Spreadsheets and a good traffic light system. It’s kind of like taking a massive bag of Skittles, throwing them all over a table, sorting them into colours and seeing which one you have the most of. In a spreadsheet, I like to traffic-light everything from red to green for priority, violence level, follow-up and more. For some of our projects we have more than 30 columns of categories and tags.
Once that’s all in a nice spreadsheet format, with the same types of tags and markers used throughout, synthesising information can be a dream. For example, doing mass geolocations on a country and then using spreadsheet filters to narrow results down to a location, time and actor is a hugely useful way to analyse important data.
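The traffic-light-and-filter workflow above maps neatly onto code: each spreadsheet row becomes a tagged record, and filtering by location, time and actor is just matching on columns. The column names, actor labels and rows below are hypothetical, chosen only to illustrate the pattern.

```python
# A spreadsheet modelled as rows of tagged geolocations (illustrative data).
ROWS = [
    {"location": "City A", "date": "2022-03-01", "actor": "unit_a", "priority": "red"},
    {"location": "City A", "date": "2022-03-02", "actor": "unit_b", "priority": "amber"},
    {"location": "City B", "date": "2022-03-01", "actor": "unit_a", "priority": "green"},
]

def filter_rows(rows, **criteria):
    """Keep only rows matching every column=value pair, like stacked spreadsheet filters."""
    return [r for r in rows if all(r.get(k) == v for k, v in criteria.items())]

# Traffic-light sort: red (highest priority) first, then amber, then green.
LIGHT_ORDER = {"red": 0, "amber": 1, "green": 2}
by_priority = sorted(ROWS, key=lambda r: LIGHT_ORDER[r["priority"]])

# e.g. everything involving one actor at one location
subset = filter_rows(ROWS, location="City A", actor="unit_a")
```

With 30+ tag columns, the same two operations (sort by colour, stack filters) scale without changing, which is what makes a consistently tagged spreadsheet so powerful.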
[note: this interview has not been edited]