Drawing Artificial Intelligence: DALL-E 2

Artificial intelligence has long featured in pop culture through films such as The Matrix, Free Guy, and Terminator, many of which depict it negatively. The likes of Microsoft, Google, Amazon, Samsung, and Apple have invested heavily in the AI space. Smaller companies are active too: OpenAI has created DALL-E 2, the latest version of its text-to-image generation program.

What is DALL-E 2?

DALL-E 2 is an artificial intelligence program that produces pictures from descriptions written by users. It is the successor to the original DALL-E, which was released in January 2021. DALL-E 2 adds new capabilities such as editing an existing image: users can select an area and tell the model how to change it, and the model can remove or add objects while accounting for details like the direction of shadows in a room. DALL-E 2 can also blend two images, generating a result that contains elements of both.

DALL-E 2 has not been released to the public, but researchers can sign up online to preview the system.

How Does DALL-E 2 Work?

DALL-E 2 is built on CLIP, a computer vision system that OpenAI announced last year. CLIP can look at images and summarise their contents much the way a human would. OpenAI then inverted that process, so that a user starts with a description and the system works its way towards an image that matches it.
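At its core, CLIP matches images and text by mapping both into a shared embedding space and scoring pairs by cosine similarity. The toy sketch below illustrates only that matching step; the captions and vectors are made-up placeholders standing in for real CLIP encoder outputs, not actual model values.

```python
import math

# Hand-picked 3-D vectors standing in for real CLIP text embeddings.
# In the real system, a neural text encoder produces these.
TEXT_EMBEDDINGS = {
    "a photo of a cat": [0.9, 0.1, 0.2],
    "a photo of a dog": [0.1, 0.9, 0.2],
}

def cosine_similarity(a, b):
    """Score how closely two embedding vectors point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def best_caption(image_embedding, text_embeddings):
    """Pick the caption whose embedding best matches the image embedding."""
    return max(
        text_embeddings,
        key=lambda caption: cosine_similarity(image_embedding, text_embeddings[caption]),
    )

# Pretend an image encoder produced this vector for a cat photo.
cat_image_embedding = [0.88, 0.15, 0.25]
print(best_caption(cat_image_embedding, TEXT_EMBEDDINGS))  # "a photo of a cat"
```

DALL-E 2 effectively runs this matching in reverse: rather than scoring captions against a fixed image, it searches for an image whose embedding scores highly against the user's description.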

Video: DALL-E 2 Explained by OpenAI

Issues With DALL-E 2

As with any artificial intelligence system, there are potential dangers that come with using it. OpenAI has highlighted risks it will continue to examine as it builds on the system, such as bias in image generation and the production of misinformation, and has addressed them with technical safeguards and new content policies. For instance, a watermark indicates the AI-generated nature of each work, and the tool cannot generate a recognisable face from a name. Users are banned from uploading or generating images that are not G-rated or that could cause harm, including anything involving hate symbols, nudity, obscene gestures, or major conspiracies or events related to ongoing geopolitical situations. These safeguards will be important if the company decides to make the tool available to third-party apps.

Summary

The development of DALL-E 2 has shown the advanced capabilities of artificial intelligence. The detail and accuracy of the images generated from users' descriptions are eye-opening. Fake news is a huge problem on social media, particularly during the height of the pandemic, and tools such as DALL-E 2 and Jasper (an AI copywriter) have the potential to generate realistic fake news; it will be up to developers such as OpenAI to implement safeguards that prevent such content from being generated. Responsibility also falls on users: in one incident, Twitter users manipulated an innocent Microsoft AI chatbot into posting racist messages. As artificial intelligence continues to advance, developers, companies, and government institutions will need to implement safeguards to ensure it is not misused and is designed to improve well-being rather than harm it.

Video: AI Made This Thumbnail by Marques Brownlee
