Personal project
Artificial Unintelligence
Machine learning by doing
To learn a bit about neural networks and the practical applications of AI, I played around with OpenAI’s machine learning model GPT-2. What I quickly realised is how unpredictable it can be. While grammatically correct, its outputs are often utterly absurd, and I. AM. OBSESSED.
Working with the streaming service All 4 at the time, I naturally decided to feed the network a bunch of programme descriptions and let the AI generate a series of shows from an alternate reality. I present to you: TV from a computer’s fever dream.
GPT-2 is a text generation tool that creates a continuation of any text you give it, using a fully trained neural network. It's configurable to produce any length of text on practically any topic. Although it was only trained to predict the next word in a text, it unexpectedly picked up broader skills – in some cases translating between languages and answering questions – without ever being told it would be evaluated on those tasks. Creepy.
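To make the "just predict the next word" idea concrete, here's a deliberately tiny sketch of the same principle in Python. It stands in for GPT-2 with simple bigram counts over a toy corpus (GPT-2 does this with a transformer network and 40GB of text, which is why its continuations are so much richer) – purely an illustration, not how the real model is implemented:

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for training data. The "model" is just a count of
# which word tends to follow which -- the crudest possible next-word predictor.
corpus = (
    "the detective follows a trail of clues . "
    "the detective follows a suspect through london . "
    "the chef follows a recipe from memory ."
).split()

# Count how often each word follows each preceding word (bigram counts).
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def generate(prompt_word, length=6, seed=0):
    """Continue a text by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        counts = next_word_counts.get(out[-1])
        if not counts:
            break  # dead end: this word never had a successor in the corpus
        words, weights = zip(*counts.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

Feed it a prompt word and it rambles onward one word at a time, occasionally stitching sentences together in ways the corpus never did – the same mechanism that, at vastly larger scale, produces GPT-2's eerily fluent fever dreams.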
The network was trained with 40GB of random text from 8 million web pages. Instead of using a typical crawler, OpenAI wanted to curate somewhat relevant and good-quality content – so they used websites mentioned in random Reddit posts with a high level of ‘upvotes’.
The diversity of the training material behind GPT-2 means it performs a lot better than other language models that have been trained on specific domains (like Wikipedia, news, or books). It sets new records on a number of language-modelling and reading-comprehension benchmarks.
GPT-2 has some great applications, like better speech recognition and translation – but it also has some dangerous ones, such as making it cheaper to create malicious content and run phishing campaigns at scale. Earlier work that OpenAI did on synthetic imagery, audio, and video shows that this kind of technology can play a big part in advancing a dangerous new wave of disinformation campaigns. Because of this, the full dataset has not been released.
Other projects
Get your binge on – Channel 4
Believe in better hearing – Amplifon
Helping Britain Bank – NatWest / RBS
Enabling Good Hair Days – All Things Hair (Unilever)
Tailoring a white label brand – Unilever
Redefining visitor experiences – Yas Island
Tackling Toxic Masculinity – Axe / Lynx
PowerPoint Bingo – Bored-game
Junk Paper – Upcycled notepads
Various posters – Graphic Design
Chubby font – Typeface
It's Virtually Christmas – VR/AR Tech exhibition
Beatballs – Utterly weird
Anders Kristoffersen
Designer
Making cool shit in:
Lofoten Vesterålen Trondheim Southampton Stockholm London
© 2022