With generative AI spreading like wildfire and rapidly producing more and more content, not just through the work of real humans but also through automated agents, who is to say what is true and what isn't?
For example, discussions about the presidential election, vaccines, and the world's climate, to mention just a few, may all be heavily influenced by content that we can no longer be sure was created by a human.
Consider this: the content that AI systems have at their disposal for training will form the basis of our future reality.
Like a growing snowball, the data used to train our AI systems will increasingly consist of such artificially generated content, and the artificial element of their output will grow along with it.
If, in the course of any kind of competition, narratives are created that distort the reality of contemporary and historical events and developments, then these too will be learned by our AI systems and will influence what they generate.
In a world where we inform ourselves through the Internet, who can tell whose reality we are listening to?
Are we heading into a future where our hopes and fears are driven by artificially generated stories, curated by profit-oriented mechanisms, much like the lives of the prisoners in Plato's cave?