Bigfoot investigation has reached a plateau. To become more effective, investigators need new techniques and approaches. AI presents an opportunity to move Bigfoot research and investigation forward.
The Potential of AI in Bigfoot Field Investigation
While AI is a menace that might destroy all life on the planet, it also opens up a new frontier of capabilities for Bigfooters willing to adapt to a completely new set of tools. The next generation of Bigfoot field investigators is going to have to learn to utilize technology on steroids.
In Bigfootery, AI has so far resulted in even more social media clutter than usual. That’s because generative AI art gives you everything except what you want, kind of like a slot machine. Your odds improve only if you invest in better tools: specialized, purpose-built models can be dramatically more accurate than free public tools like ChatGPT.
To be useful, data has to be trustworthy. That means ChatGPT is fine for recreational purposes but not reliable as a tool for Bigfoot investigation. Field investigation data can’t be mixed with unreliable public data. To train a Bigfoot LLM, avoid any source whose quality and guardrails are outside the control of the Bigfoot data-owner.
With AI, the results you get are only as good as the data the model, in this case a language model (LM), has learned from. Without specialized knowledge, it’s not possible to build an LM. Not even by using AI to create an AI model! Bigfooters will eventually be able to tap into an out-of-the-box LM that can be customized. These are starting to become available in mainstream markets. Google’s NotebookLM is impressive, but expect subscription costs.
A tangible goal for Bigfooters is to use a custom LM to construct a representation of the data they collect in the field, whether that is audio, video, thermal or drone footage, or metadata like distances between objects, terrain descriptions, time of day, and weather, along with subjective details relevant to an area of study.
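To make that concrete, here is a minimal sketch of what a structured field observation record could look like in Python. The field names and values are purely illustrative assumptions, not a standard.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FieldObservation:
    """One field observation, structured so a custom model pipeline can ingest it."""
    site_id: str                      # investigator-assigned location code
    timestamp: str                    # ISO 8601, e.g. "2024-10-31T22:15:00-07:00"
    media_files: list[str] = field(default_factory=list)  # audio/video/thermal/LiDAR file paths
    terrain: Optional[str] = None     # free-text terrain description
    weather: Optional[str] = None     # conditions at time of recording
    distance_to_subject_m: Optional[float] = None  # estimated distance, in meters
    notes: Optional[str] = None       # subjective details relevant to the study area

# Example record, ready to be serialized to JSON and stored with the rest of the data set.
obs = FieldObservation(
    site_id="WA-OLY-042",
    timestamp="2024-10-31T22:15:00-07:00",
    media_files=["thermal_clip_017.mp4", "audio_017.flac"],
    terrain="second-growth conifer, steep creek drainage",
    weather="light rain, 6 C, low wind",
    distance_to_subject_m=85.0,
    notes="wood-knock heard twice before recording started",
)
```

The point is not the exact fields but the habit: every observation gets the same structure, so a model can compare like with like.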
Some examples of useful AI projects might be training a model to create a visual reconstruction of a sighting scenario or interpreting a toxicology report of soil samples. It would be a leap forward if someone trained a proprietary LM that interpreted LiDAR (Light Detection and Ranging) data. AI is amazingly efficient with remote sensing tasks. With the rising popularity of drones, it’s becoming less far-fetched to scan large swaths of terrain with thermal imagers and LiDAR and collect footage.
LiDAR is an under-utilized technology in Bigfootery. A LiDAR-equipped drone can strip away vegetation cover in its maps and reveal hidden landscape features such as sinkholes and cave entrances. Proprietary AI is making great strides in being able to create 3D or even 4D maps extremely quickly. Four-dimensional AI models are emerging in markets such as the automotive, robotics, medical, and scientific industries. That means the technology will soon be in the hands of consumers, too.
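As a rough illustration, here is a minimal sketch of loading a drone-survey point cloud with the laspy library and flagging unusually low points worth a second look. The filename and the 2-meter threshold are assumptions for the example.

```python
# Minimal sketch: load a drone-survey LiDAR point cloud and flag low-lying
# points that might mark depressions or openings in the terrain.
# Assumes a .las file exists at the (hypothetical) path below.
import laspy
import numpy as np

las = laspy.read("survey_area.las")              # hypothetical file name
points = np.column_stack((las.x, las.y, las.z))  # N x 3 array of coordinates

# Flag points well below the local median elevation as candidate depressions.
median_z = np.median(points[:, 2])
candidates = points[points[:, 2] < median_z - 2.0]  # 2 m threshold is arbitrary

print(f"{len(points)} points total, {len(candidates)} candidate low points")
```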
AI would save countless hours over today’s process of stitching together anomalies in thermal images with a video editor. Human eyes cannot reliably detect subtle movements and patterns of objects in thermal images. That’s a fact. If an animal or some creature chooses to be stealthy, then human eyes are not going to pick it out of raw thermal or standard drone footage captured at altitude.
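Here is a rough sketch of that kind of automation using OpenCV: flag frames of a thermal clip that contain warm blobs so a human only reviews the flagged frames. The filename and the brightness threshold are assumptions; real footage would need calibration to the camera’s palette.

```python
# Rough sketch: flag warm blobs in a thermal video so a human only has to
# review flagged frames. File name and brightness threshold are assumptions.
import cv2

cap = cv2.VideoCapture("thermal_flight_01.mp4")  # hypothetical drone clip
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # In most thermal palettes, hotter objects render brighter.
    _, hot = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(hot, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if cv2.contourArea(c) > 50]  # ignore tiny specks
    if blobs:
        print(f"frame {frame_idx}: {len(blobs)} warm blob(s) worth reviewing")
    frame_idx += 1
cap.release()
```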
The same logic applies to spectrograms. Sound waves are vibrations of the particles of the medium the wave moves through. Human eyes and ears will not pick out patterns at the level of detail that machine learning algorithms can detect in the wave data.
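For a sense of what an algorithm actually works with, this small sketch turns a field recording into a spectrogram array, a grid of sound energy over frequency and time, that software can scan cell by cell. The filename is an assumption.

```python
# Small sketch: turn a field recording into a spectrogram array that an
# algorithm (or a human) can scan for patterns. Filename is an assumption.
import soundfile as sf
from scipy import signal

audio, sample_rate = sf.read("night_recording.flac")  # hypothetical recording
if audio.ndim > 1:
    audio = audio.mean(axis=1)  # mix stereo down to mono

# freqs: frequency bins (Hz), times: time bins (s), sxx: sound power at each bin
freqs, times, sxx = signal.spectrogram(audio, fs=sample_rate, nperseg=2048)
print(f"spectrogram shape: {sxx.shape} (frequency bins x time steps)")
```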
Bigfoot Audio Data
Of particular interest to many Bigfooters is figuring out a way to use AI to interpret alleged Bigfoot vocalizations in audio. It turns out AI is remarkably good at interpreting bio-acoustic data. That said, deciphering a Bigfoot language is likely not a realistic goal just yet. Let’s wait for AI to accurately interpret a significant portion of any animal language before jumping into that one.
For this ambitious idea and others to be effective, you would have to create algorithms that extract distinctive features from audio data, and there is a multitude of metadata to consider. Most likely, someone has already developed code to do this for general field surveillance of animals.
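As one example of what "extracting features" means in practice, here is a sketch using librosa, a library commonly used in bio-acoustics and music analysis. The clip name is an assumption.

```python
# Sketch of basic feature extraction from a field recording using librosa.
# Filename is an assumption.
import librosa

audio, sample_rate = librosa.load("vocalization_clip.wav", sr=None)  # keep native sample rate

# Mel-frequency cepstral coefficients: a compact summary of the sound's character,
# widely used as input features for machine-learning audio classifiers.
mfcc = librosa.feature.mfcc(y=audio, sr=sample_rate, n_mfcc=13)
print(f"feature matrix: {mfcc.shape[0]} coefficients x {mfcc.shape[1]} time frames")
```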
Sound recorded in the field can be parsed into a numerical representation and compared with other data sets. AI builds its understanding of something by comparing data. Its outputs reflect what the LM thinks it has learned from the data it is provided with. An AI model needs to be fed new data in order to evolve. The cleaner the data, the more reliable the outputs.
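Continuing the hypothetical features from the previous sketch, one simple way to compare two recordings is to reduce each to an average feature vector and measure how similar they are. The clip names are assumptions.

```python
# Reduce two clips to average feature vectors and compare them with
# cosine similarity. Clip names are assumptions.
import librosa
import numpy as np

def clip_features(path: str) -> np.ndarray:
    """Average the MFCCs over time to get one feature vector per clip."""
    audio, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13).mean(axis=1)

a = clip_features("site_a_howl.wav")   # hypothetical recordings
b = clip_features("site_b_howl.wav")

similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"cosine similarity between clips: {similarity:.3f}")
```

A score near 1.0 means the two clips have very similar overall character; this is the kind of comparison a model runs at scale across an entire data set.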
Clean audio data is lossless and has minimal background noise. If you are working with audio, consider formats to be of critical importance in obtaining clean data. Always record in a lossless format such as FLAC (lossless compression) or WAV (uncompressed). Those files are larger and consume more bandwidth and storage space, but the tradeoff is reliability. Lossless formats preserve the sound signal in its pure form, which increases the accuracy of recognition. I’ll talk more about how to achieve clean data in part 2.
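If you inherit recordings from other investigators, it is worth verifying the format before trusting them. Here is a small check using the soundfile library; the filename is an assumption.

```python
# Quick check that a recording really is lossless before adding it to a data set.
# Filename is an assumption; soundfile reports the container and encoding.
import soundfile as sf

info = sf.info("night_recording.flac")
print(f"format: {info.format}, encoding: {info.subtype}, "
      f"{info.samplerate} Hz, {info.channels} channel(s), {info.duration:.1f} s")

# Lossy encodings (e.g. MP3, Vorbis) are a red flag for training data.
if info.format in ("WAV", "FLAC", "AIFF"):
    print("lossless container: OK for the data set")
else:
    print("warning: possibly lossy; keep or re-record the original lossless source")
```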
Whew! If you made it this far, then you already know that AI is very energy-intensive, so don’t forget to bring mobile chargers into the field.