Episode 211

Intelligent Design. Creative Industries At Peril.

Published on: 4th July, 2022

Neural networks can now create photo-realistic images from simple text inputs. As AI makes inroads into diverse professions, ranging from architecture to accounting, are these entry-level AIs displacing the next generation of white-collar workers?

Hosted by Matt Armitage & Richard Bradbury

Produced by Richard Bradbury for BFM89.9

Episode Sources: 

https://www.newscientist.com/article/2322056-will-ai-text-to-image-generators-put-illustrators-out-of-a-job/

https://www.newscientist.com/article/2264022-ai-illustrator-draws-imaginative-pictures-to-go-with-text-captions/

https://www.archdaily.com/982243/algorithms-and-aesthetics-the-future-of-generative-design

https://imagen.research.google

https://arxiv.org/abs/2205.11487

https://www.synthesia.io

https://playphrase.me

Photo by Axel Ruffini on Unsplash

Subscribe to the Substack newsletter: https://kulturpop.substack.com

Follow us:

Tw: @kulturmatt

In: @kulturpop & @kulturmatt

W: www.kulturpop.com

Transcript

Richard Bradbury: Last week we talked about LaMDA, a supposedly sentient chatbot that some parts of the Internet seem to think heralds the coming of the Singularity. We’re sticking with AI this week, but focusing on its power to alter the way we work.

Richard Bradbury: Is this related to the boss bot episode we did back in November?

Matt Armitage:

• That was ep 189, I think. One of our better ones.

• They’re all better ones though. Which makes it an average one.

• Or something. It’s so hard being exceptional.

Richard Bradbury: replies…

Matt Armitage:

• Yes, so in the Boss Bot episode, we talked about the increasing use of AI as line managers.

• Especially in the gig sector where you log onto an app and it doles out your tasks and assignments.

• And constantly evaluates your performance.

• In some instances – because they are not employees of the company that runs the app – these so-called operators can be terminated automatically.

• For falling below the performance terms of the app.

• And we see some of that technology creeping into the regular employment sector.

• Surveillance software on computers. Algorithms grading productivity and performance.

• I’m biased, but I think it’s worth a listen if you haven’t heard it.

Richard Bradbury: So, we’re on new ground today?

Matt Armitage:

• Yes, so this is something that we’ve alluded to in previous shows.

• We’ve covered various breakthroughs as they have occurred.

• But it’s not something we’ve gone into enormous detail about.

• It was actually quite a small story that I spotted on New Scientist that made me think that now would be a good time to go into this.

• It was a story about text-to-image generators.

• Which are pretty cool and quirky – seemingly one of those funky idea things.

• And I thought it would be fun to look at some of the potential implications.

Richard Bradbury: For the people who don’t live on the same cloud as you: would you like to explain what a text-to-image generator is?

Matt Armitage:

• It’s pretty straightforward: you type something and the machine makes an image out of it.

• So, it’s quite a good companion piece to last week, where we were talking about LaMDA.

• Which is a text-based conversation platform. We’ve covered a lot of these text-based systems.

• Text-to-speech. Things like that.

• So, if I typed ‘red rose’ into a text-to-image generator, that’s what it would create.

• Depending on the parameters of the system, or the parameters I request.

• It could be a photo-realistic rose, or a simulated sketch or illustration.

• In episode 202 we discussed an AI-based artist system called Ai-Da, which paints portraits of people with deliberate errors.

• So that no two paintings of the same subject will ever be the same.

• And we mentioned during that episode that part of the purpose of that system was to question the role and purpose of AI.

• Are we ok with machine-generated art? Should AI be helping to fuel our creative endeavours?

• And if so: in what way or ways?

Richard Bradbury: What are the chances of you typing red rose into an image generator?

Matt Armitage:

• Slim to zero. Red roses growing through a zombie, possibly.

• Obviously, the full text for that image is too graphic for me to mention on daytime radio.

• But I wanted something simple as an example.

• And, of course, the systems that outfits like OpenAI and Google are working on are already way beyond those basic operations.

• I think we covered – this must be a year or two ago – systems that predict what someone would look like from a line drawing.

• Great for police photofits and other less democratically pleasant purposes.

• I’ll go into some of the systems themselves in a minute. But we’re already at the point where these systems can create composites based on the words you input.

• A cat wearing shades playing guitar while riding a skateboard on a beach – just off the top of my head.

• That’s one of the actual examples.

• Or something more realistic, like a cougar in the savannah. Tick.

Richard Bradbury: Is that where we’re going with this: that this kind of technology could threaten the livelihoods of graphic designers and illustrators?

Matt Armitage:

• It could be quite destructive for the creative industries in general.

• Design tools like Canva – which isn’t AI-powered, or at least not yet –

• Have already made inroads into areas of the design agency business.

• If you don’t know what it is – Canva is an online design tool that bundles simplified versions of many of Adobe’s much more expensive design, illustration and video editing tools.

• It provides thousands of templates for everything from social media posts to posters and in-store creative.

• Which has made designing promotion materials as easy as drag and drop.

• Not a big deal for multinationals – L’Oreal probably isn’t going to put together its next campaign on Canva.

• But that’s the glamour end of the industry.

• The reality is that the industry largely consists of small agencies and artists working for small businesses.

• Small budgets, small businesses.

• Tools like Canva are already starting to have an impact at that end of the business.

Richard Bradbury: Just in terms of cost advantage?

Matt Armitage:

• And in terms of speed. Agencies take time to turn things around.

• Social media is an instant tool. Most companies won’t have someone at an agency dedicated to creating their posts in real-time.

• If a small business does have an active and engaged community – these quick and dirty style tools can allow them to put out comms that match their needs.

• Lots of specials left over after your café’s lunchtime rush?

• Quick Canva post to tell everyone there’s 30% off specials before 5pm.

• Slow foot traffic through your hardware store on a Tuesday afternoon?

• How about booking Matt Armitage for a special instore appearance?

Richard Bradbury: Is that supposed to increase foot traffic?

Matt Armitage:

• You could book me to do an instore for your competitor if you really want to drive up your business.

• Tools like this are already changing the balance – making it easier for small businesses to create high quality content.

• AI-powered tools take that design process a stage further.

• Instead of a member of staff having to sit there and make the story post or a quick TikTok.

• You can type what you’re looking for and the machine will create it for you.

Richard Bradbury: I know we keep having this discussion, but can a machine really be creative?

Matt Armitage:

• For the majority of things we look for, creativity comes second.

• As I said, this isn’t L’Oreal, or Coke, or McDonald’s trying to maintain a global market.

• First of all, there’s the cost. What’s your small business spending on its creative agency?

• You could subscribe to a few of these AI tools – one for static images and posts, for example, and another for video.

• Add in a stock image service.

• You could still get change out of USD100 a month.

• And with the AI tools, you’re committing less of your own or an employee’s time to the task as well.

• So that’s another saving.

Richard Bradbury: Quality could still be an issue…

Matt Armitage:

• Of course. That comes back to you as a business owner knowing your brand and your identity.

• Changes are the bane of the creative industry. From the stultified designer begrudgingly making the changes.

• To the delays to completion that so irritate clients.

• The machine doesn’t care. You want a thousand different versions? Fine.

• A hundred revisions to the green tone to get it just so? And believe me, I’ve been there.

• For the machine: not a problem.

• And it won’t cost you anything but your time.

• I’ve told this anecdote before – about Joshua Davis, an American new media artist, amongst other things.

• Part of his artistic process at the time I spoke to him was the writing and design of the algorithm that would create the work.

• That’s his creativity. The machine outputs as many variations as he wants, and it’s that editing process – selecting the pieces for the final work – that makes it art. There’s a sketch of that workflow below.

• Let’s not forget that the Damien Hirsts of the world don’t pickle their own sharks or create their own jewelled skulls.

• They’re conceptual artists. They have craftsmen and support companies that make the actual work.
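As a rough illustration of that generate-and-select workflow, here’s a minimal Python sketch. The toy algorithm and the “taste” filter are stand-ins for illustration only, not Joshua Davis’s actual process:

    # Toy generate-and-curate loop: the machine produces variations,
    # and the human's judgement - crudely encoded as a filter - selects.
    import random

    def make_variation(seed: int) -> dict:
        """Generate one abstract 'composition' from a seed."""
        rng = random.Random(seed)
        return {
            "seed": seed,
            "palette": rng.choice(["monochrome", "neon", "pastel"]),
            "shapes": rng.randint(3, 300),
            "symmetry": rng.random(),
        }

    # The machine outputs as many variations as the artist wants...
    variations = [make_variation(seed) for seed in range(1000)]

    # ...and the creative act is the editing: here, keep only sparse,
    # highly symmetric pieces (a stand-in for the artist's eye).
    keepers = [v for v in variations if v["shapes"] < 20 and v["symmetry"] > 0.9]
    print(len(keepers), "pieces selected for the final work")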

Richard Bradbury: Can we get into how these systems work?

Matt Armitage:

• Two of the biggest, or most advanced systems, belong to Google and to OpenAI.

• There are no real surprises there.

• If you go to the page for Imagen – that’s the Google system – you’ll see some pretty incredible composite imagery.

• Google’s neural net, Imagen, uses a text-to-image diffusion model.

• Which basically means it was trained on huge datasets that allow it to ‘understand’ – I hope you can hear the inverted commas in my voice – understand language.

• And it uses a diffusion model to translate those text prompts into high-resolution images (there’s a rough sketch of that pipeline below).

• I’ll add the link to the research paper, titled Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding, to the show notes.

• OpenAI’s DALL-E 2 is pretty similar, combining language understanding with those diffusion abilities.

• I wanted to get into how these systems work but I’ve talked myself into the break as usual, so we’ll come back to it in a minute.

• The key thing about the images is that phrase photo-realistic:

o These images look like the kind of highly produced finished product you’d see in TV and print ads.

• Which is quite an incredible thought: that they can be generated from a couple of lines of text.
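To make the prompt-to-picture pipeline concrete, here’s a toy Python sketch. The encoder and generator are stand-ins, not Imagen’s actual models – the paper pairs a frozen language-model text encoder with a cascade of diffusion models:

    # Toy text-to-image pipeline: turn words into an "embedding",
    # then use it to seed a stand-in image generator.
    import numpy as np

    def encode_text(prompt: str, dim: int = 64) -> np.ndarray:
        """Stand-in text encoder: map each word to a fixed-size vector."""
        vec = np.zeros(dim)
        for word in prompt.lower().split():
            rng = np.random.default_rng(abs(hash(word)) % (2**32))
            vec += rng.standard_normal(dim)
        return vec / max(len(prompt.split()), 1)

    def generate_image(embedding: np.ndarray, size: int = 64) -> np.ndarray:
        """Stand-in generator: a noise field seeded by the embedding.
        A real system would run a text-conditioned diffusion model here,
        then pass the result through super-resolution stages."""
        seed = int(abs(embedding.sum()) * 1e6) % (2**32)
        rng = np.random.default_rng(seed)
        return rng.random((size, size, 3))  # H x W x RGB values in [0, 1]

    image = generate_image(encode_text("a red rose"))
    print(image.shape)  # (64, 64, 3)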

Richard Bradbury: After the break: what on earth does any of this mean?

BREAK

Richard Bradbury: We’re back to one of Matt’s favourite topics this week, scary AI and how it’s going to destroy our lives. Before the break Matt promised to outline how text-to-image neural nets actually work.

Richard Bradbury: We’ve talked about the language understanding part in relation to LaMDA. Can we talk about those diffusion models?

Matt Armitage:

• Sure, this is the kind of magic bit.

• The AI starts off with a bunch of dots and shapes them into images.

• These aren’t composites of existing images.

• It rearranges the dots until they resemble the images from the datasets that trained it.

• What does that mean?

• Well, if you type Eiffel Tower, it doesn’t drop in an image of the tower cropped from an existing photo.

• It recreates it from scratch – drawing on what it learned from all the images in its datasets.

• Which allows you to create it from pretty much any perspective you want as well.

• And the same goes for the details that accompany the image: sky, skyline, shops.

• So you could put the Eiffel Tower in New York’s Times Square and make the sky pink and turn the sun into a rose. Or a zombie.

• You can also use it to enhance or alter existing images. One of the DALL-E 2 examples replaces a dog sitting on a chair with a cat.

• Which it does seamlessly – at least in the examples we’re allowed to see – to the correct scale and orientation.
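Here’s a toy Python sketch of that ‘bunch of dots’ idea. In a real diffusion model, the denoising step is learned from millions of image–text pairs; here a fixed target image stands in for the model’s prediction:

    # Toy diffusion-style sampling: start from pure noise and nudge it,
    # step by step, toward what "training images" look like.
    import numpy as np

    rng = np.random.default_rng(0)
    target = rng.random((8, 8))       # stand-in for the model's learned prediction
    x = rng.standard_normal((8, 8))   # step 0: a bunch of random dots

    for step in range(50):
        predicted_denoised = target              # a real model infers this from x
        x = x + 0.1 * (predicted_denoised - x)   # rearrange the dots toward it

    print(np.abs(x - target).mean())  # ~0: the noise now resembles the image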

Richard Bradbury: Is there a concern with these systems about the potential for things like Deepfakes?

Matt Armitage:

• There are lots of concerns, although that’s not really the focus of today’s show.

• After the break I want to talk more about the potential impact of the technologies on jobs and the way we work.

• So far there haven’t been any public demos of the technologies – something that the New Scientist article notes.

• That could be partly because of the potential to create deepfakes or perhaps even harmful results.

• Again, we come back to those biases in datasets. We don’t know if these systems have inherent biases baked in.

• Racism, sexism. So, it may be that the systems need more development and filtering before they can be released to the public.

• Which, to be fair, is the same with LaMDA.

• They’re currently seen only as developmental tools, not public-facing ones.

Richard Bradbury: Is that why a lot of the images you mentioned have cute or cuddly animals in them?

Matt Armitage:

• Very much so.

• The tone is definitely light-hearted, more so with Imagen than DALL-E 2.

• I mentioned the skateboarding beach cat earlier.

• Now you can take that in a number of ways – that this is a fun tool that can put together your wild flights of fancy.

o These are genuine examples – a dragonfruit wearing a karate belt in the snow.

o A marble statue of a koala playing a DJ set.

o A cobra made out of sweetcorn on a farm.

o A bluejay standing on a basket of macarons.

• Or you can look at these cute interpretations as a way of normalising something that you find a bit scary.

Richard Bradbury: Which brings us to that wider point: the implication for industries and the people working in them?

Matt Armitage:

• Yes, so I’ll look at the creative industries first. But it isn’t limited to them as I’ll show later.

• It’s much broader.

• It’s the wider potential implications of using those tools that I want to look at in this part of the show.

• Let’s say these tools are released and they’re too expensive for your average small business to invest in.

• Unlikely, given the cost of some of the tools already on the market, which I’ll give a brief run-through of shortly.

Richard Bradbury: You think they could be used as a cost-saving measure for existing creative agencies?

Matt Armitage:

• That would be one purpose. Instead of having a dozen illustrators and graphic designers on staff.

• You can outsource all the creative grunt work to a neural net in the cloud, and your existing creative directors make the executive decisions about what the client sees.

• Imagine how much faster the process would be. The account handling team gets back from a meeting with the client.

• The brief is literally fed to the neural net which starts to churn out examples of what the client is looking for.

• So the creative director effectively acts as editor. Tweaking versions and text to get something to send back to the client.

• A process that typically takes days or weeks could now be turned around in a few hours.

Richard Bradbury: So there’s a combination of cost-saving and competitive advantage?

Matt Armitage:

• Exactly. I used those examples of big marquee campaign ads.

• But the reality of the business is that most of the creative work is very transient – insta stories, tweets.

• Things that come and go in 24 hours. So companies – understandably - aren’t willing to spend big bucks on them.

• They’re ephemeral, so their budgets are, too.

• So the business has become about volume and razor-thin margins.

• That’s been edging forwards for years –

• I’ve got friends who would do nothing for weeks except resize and reformat images for airlines to fit dozens of different banner ad sizes.

• With clients increasingly pushing for much shorter turnaround times as their own marketing is now subject to those same fast-moving consumer and cultural considerations.

• Technology that enables you to cut your headcount and increase your speed and efficiency sounds too good to be true.

Richard Bradbury: And presumably this isn’t limited to designers and illustrators?

Matt Armitage:

• No, so here’s the audio part of a video clip that I created:

• Play Clip

• I’m not sure if we can get this played on BFM’s socials. Maybe we can, I’ll have to get Richard to ask the higher powers that live in the social media bunker.

• You can always follow Kulturpop’s socials to see the clip.

• I created the video from a few lines of text.

• The AI turned it into a video with a realistic avatar.

• Again – not a human but an AI creation. It added a moving background and some very quiet music.

• This is the kind of thing you could use for infomercials, tutorials, presentations.

• And this is a commercially available cloud product that costs around USD30 a month.

• You can go to synthesia.io and create your own demo video.

Richard Bradbury: So we’re knocking off video creators as well?

Matt Armitage:

• Yes – we’ve both used Descript to generate audio versions of our own voices and use them on this show.

• Synthesia does something similar, but with avatars. And it works in around 40 different languages, I think.

• There are tools that will automatically interpolate frames for you – boosting and re-rendering the quality of grainy or badly shot footage.

• There are tools that will let you automatically delete something from a picture:

o It could be that ex-boyfriend who ruined your vacation shots.

o Or it could be a product image – you want to use an old background but substitute a new product or packaging without having to reshoot the video.

• This is really cool – a website called playphrase.me.

• You can type a phrase and it will create a customizable montage of clips from movies using those words or that phrase.

• Copyright means I couldn’t generate one for the show, but go check it out.

• Hours of fun.

• It’s the same story with colour correction. Fine tuning edits. Audio normalisation.

Richard Bradbury: But beyond the narrow confines of the creative industries…

Matt Armitage:

• I was going to quickly mention copywriting too.

• Most of the AI tools I’ve used so far have been lacklustre.

• The same as with chatbots – they get very samey and veer off topic quickly.

• But tools like Grammarly are amazing. My own copy is pretty clean.

• But whacking it into the premium version of Grammarly gives it an edge.

• It shows me words I’ve repeated too many times. Linguistic tricks I use too often.

• Over-reliance on the passive voice or sentences that lack definition.

• Something that even seasoned pros can benefit from.

• Does architecture count as outside the creative industries?

Richard Bradbury: I’ll let you have it…

Matt Armitage:

• AI is increasingly being used in the profession.

• Generative design tools and machine learning can help architects and urban planners to create cities and developments that meet contemporary needs.

• You can model the flow of traffic and people, and highlight potential flaws or congestion points.

• Using big data to analyse satellite imagery is becoming an increasingly useful way for urban planners in developing countries to map urban sprawl and shanty towns.

• And plan how to reach them with basic services and amenities.

• So there are all these new ways to build quality of life measurements into your initial design.

• And build adaptivity into it.

• For example, equipping those designs with systems and sensors that allow you to monitor how they’re being used in real time.

• And to make changes to neighbourhood spaces to accommodate those use cases.

Richard Bradbury: But those are still essentially assistive tools…

Matt Armitage:

• Some of the generative design tools are standalone.

• But we can go further: a startup called Higharc aims to enable consumers to design their own homes using its proprietary AI.

• And bypass the architect completely.

• Apparently, the software can even produce production-ready blueprints that you can pass to your builder.

• Data is pulled in from satellite imagery, so the homes created are specific to the plots and surveyed land data they’re built on.

• The hope is to make it easier for people to get the house they want, at a price they can afford.

Richard Bradbury: So these tools are working hand in hand with the kind of assistive or replacement technologies we’re seeing in law, accountancy, medicine and other professions?

Matt Armitage:

• Yes, and this brings me to that broader point.

• It’s great that we have these tools to take away the drudgery.

• To help small businesses and ordinary people take advantage of cutting-edge services or production methods.

• All of those things are great.

• I won’t even talk about the job losses part – I want to talk more about the skills gap I can see it creating.

• If we go back to the design tools, or equally the architecture tools.

• Those work because there is still a seasoned, experienced professional, a creative director or a master architect.

Richard Bradbury: They’re the ones signing off on the content created by the machines?

Matt Armitage:

• Exactly. They’re the ones doing the tweaking or substitution.

• Or asking the machine for new versions and alternatives.

• They exist because of the current legacy system where you qualify, get a job and work your way up in the profession.

• This technology disrupts that entire system.

• The machines are now doing those entry level tasks.

• That works fine now. The existing generations are already working their way up.

• But where will that next generation of senior designers, architects, accountants, lawyers come from?

• Without that system and that hierarchy, how can someone enter an industry and gain experience?

• More to the point – who is going to take on the cost and time of training to be a lawyer or an architect when there are so few entry-level positions?

Richard Bradbury: Please don’t say that the answer is more machines…

Matt Armitage:

• I don’t actually have an answer.

• I’m not sure anyone does yet.

• Sure, more and better machines are one solution.

• But that comes with the risk that we become that society where machines do all the actual work.

• Or rather, they do all the work they can do, and it becomes about us feeding them, doing the work they can’t.

• Which is likely to be increasingly menial.

• With us trying to cope with their flaws rather than the other way around, which was probably the original intention.

• It’s one of the problems with our current ad hoc approach to artificial intelligence.

Richard Bradbury: Deciding what it should be rather than just accepting what it is?

Matt Armitage:

• I think that’s a really nice way to frame it.

• That tool to give people the power of architecture without needing an actual architect.

• That’s great – but is that desire greater than our need for human architects?

• The same with urban planners – machines may be more efficient to a point – but they don’t understand humanity.

• Which limits their ability to design for it.

• These aren’t conversations we’re having: the technology comes and we accept it.

• We’re not looking at that bigger and longer term picture and saying: this is the direction the technology is taking us.

Richard Bradbury: And asking if this is the direction we want to be going in?

Matt Armitage:

• We have to determine the role we want AI to occupy in the design and implementation of our future.

• And carve out the roles and positions we want to keep for ourselves.

• It sounds selfish but it comes down to some basic fundamentals:

• We have essentially capitalist systems across the world.

• We work, we earn, we buy stuff.

• In a future where machines do all the work, we don’t earn, we can’t buy stuff and the support structures of that system collapse.

• Like I said – I don’t have an answer – maybe that’s a future you want to pursue.

• Irrespective, these are conversations we need to be having now, before it’s too late.

• And our future is written in code.
