=== Section 1: Start Here… ===
— Course Overview & Roadmap: Start Here —
In this course, I'll show you how to make AI video. But that's a huge statement: what exactly does it mean, and how is this course structured to teach you? I'll take you from no knowledge, a complete beginner, to being able to make videos, short films, adverts, anything you can think of. I'll do this by clearly showing you the entire process and the tools used to go from idea, to script, to AI images, to AI videos. Follow along in the course as I create an AI video from scratch, and you'll see all the tools we suggest you use, and even some that we don't.

Also, this course is over 18 hours long, so some of you won't need all of it. If I direct your attention over to the right-hand side, I'll go through in depth what's in these sections, but you can skip through if you want to.
24
We have the Start Here section, all about getting started and what's needed, plus some fundamentals, of course. Then we get into some workflows: that's how you make an AI video from start to finish, what's needed, and the different steps. Then we generate some AI video ideas with different AI tools, then make scripts and structures. Then we move on to some music generation and AI voices, with some different tools in there. Then we make a style guide, that's a mood board, and then a storyboard, which are really helpful; they're really the start of the production process for AI videos. Then, if you want to skip forward, here is AI image generation: all the different tools we're going to use to create images for AI video. After that section, of course, we get on to how to make AI videos and all the different tools: Runway, Sora, Luma, Haiper, Pika, InVideo, et cetera, et cetera. After that, we go over AI sound effects, editing, and some upscaling. If you want to skip forward to certain sections depending on your ability, that's the layout of this course in brief. More than 30 AI tools are used in this course, and counting.
54
Now, people learning AI video, or thinking about it, often come up against three pain points. Number one: they want all the information in one place, clearly presented. So we've structured this course to first go over some AI fundamentals and best practices for creating a video. Then we use AI tools to generate an idea, using ChatGPT, Gemini, Claude, Copilot, Perplexity, and even more. Then, if you need a script or structure for your video, we'll go over this with Squibler, Chatsonic, TextCortex, ChatGPT, and even more. Then let's create audio voiceovers using ElevenLabs, or even create our own music using Suno or Udio; I'll show you those. Then we'll create amazing images with AI using Midjourney, DALL·E, Meta AI, Grok, Stable Diffusion, Runway's image tools, and even more. Then it's time to turn these into incredible videos using Runway, Luma Dream Machine, Pika, InVideo, Kling, Kaiber, Haiper, the amazing Google Veo 3 with some really good, realistic results, and even more. We can then lip sync these videos, and we can upscale them to make them look even better using tools like Topaz. The course is clear, structured, well explained, and shows you all the best practices for these tools.
84
The second pain point is that the industry updates so quickly. Every week, it feels like there's a new AI tool or update coming out. As mentioned, every single week or month I will update this course, showing you updates to tools you've already learned or brand-new tools that are coming out. Don't worry: when an amazing new tool is released, this course will cover it.

The third pain point is that people want a course that's very clear and well explained. We show you every single step of the way, very clearly, so you can follow along. There are optional tasks in the course to make sure your knowledge is on point. You also get exclusive access to our site and pages, which let you read through, step by step, how to use the tools, and there are downloadable guides and a workbook if you want to follow along that way too. You will never, ever be lost on this course.
109
Now, I was just like you. I wanted to learn how to create video with AI. I taught myself this, and now I'm teaching it back to you. If I can do it, it's accessible to everyone. Once upon a time, we needed a camera, a crew, a big budget. It would take weeks or months, even years, to create a video. Now we can create a video, animation, advert, social media video, anything you can think of, all from your computer at home. So let's learn how.
— The 3 Key Methods to Create AI Videos: A Beginner’s Guide —
Some of you may be completely new to AI video creation, and some of you may have bad habits, or have been doing it in a completely different way, kind of stuck and wondering how exactly AI video is made. There are three main ways to make AI video, and in this lecture we're going to explore them, along with best practices. The objective is that by the end of the lesson, you will know exactly how AI video is made, how you're going to go forward, and how this course is structured to teach it. There's more than one way to create everything, but let's jump into the screen and I'll explain it for you visually.
9
So, AI video generation. If we pop up right here in the middle: AI video, that's our aim. We want to generate an AI video, and there are three ways to do this: text, image, and video. From any of these, you can generate a new AI video.

With text, you can input your own text or AI-generated text, put it into a text-to-video prompt, and create an AI video. We used to always say never use this, because there is a lack of consistency with text to video, but now tools like Veo 3 handle it really, really well. Yes, there is still less consistency: if you are text prompting, you have to describe your characters in the prompt, so they may change slightly from one scene to the next, as opposed to using image to video, which I'll talk about in a minute. So for better consistency, use image to video, but there may be a case for using text to video, and I'll show you some examples later in the course with some new projects I've made. Our main project here will use image to video, but I do have examples of text to video if you're interested in that. Plus, Veo 3 has automatic lip sync, which is perfect, and it also has audio built in and all this other great stuff. So text to video is now an option, but if you need consistency, perhaps it isn't the option for you.

The second way is with an image. You can use your own image, one you've created or taken, or an AI-generated image, and we'll show you how to do that. From that, you can say: turn this image into an AI video. Image to video is by far the best way to generate AI video. You have the most control, and you can keep consistency across your characters and your video. We'll be teaching this primarily in this course, generating with loads of different AI image tools and showing you how to use the results in loads of AI video tools.

Then, with video, there is such a thing as video to video. You can take your own video and change what it looks like, or use it as a reference, or use an AI-generated video. Put that into the tools we're going to show you; Runway is a great one, and we'll show you how to do video to video. And of course this can loop back: you could create an AI video in any of these ways and use that as the input for video to video.

Once you've got your AI video, there are multiple things you can do with it. You can upscale it, which means making it better quality; we show you how to do that at the end of the course, and a little during the course too, with tools like Topaz. You can apply AI effects to change the way it looks, and I'll show you some different tools for that. You can regenerate the AI video: I want another version of it, I want a longer version of that clip, I want it to do something else or show something else, zoom, tilt, pan. We can do that. You can export the video and edit it inside your own software. And once you have your video, you can of course add audio to match it. Inside the AI video tools, we can do things like lip sync; I can show you that inside Runway, and done slightly better in tools like HeyGen or Pika. So there's loads you can do once your AI video is generated.

The best practice, we feel, is to generate your image with text to image to get the great image that you want (we show you how to create that), then generate a video from your image, and then either regenerate to get it exactly as you need it, or export it, upscale it, and edit.

Hopefully that cleared a few things up. If you are a little bit confused, let me simplify this. You need an AI video, so how are you going to tell the tool to make it? You can tell it with text: make a video of a man walking through New York City. Or you can create an image of a man walking through New York City and tell the image to become a video. Or you can take a video of someone else walking and tell the AI to turn that video into a man walking through New York City. Once you have that video, you can regenerate it to get different versions, export and edit it, make it better quality, or add some effects. You can do what you want after you've created it.

Now, I mentioned a few tools while we were looking at the screen, and none of those will mean anything to you yet. Don't worry.
60
In the next lecture, I'm going to go over all the tools we're going to learn in this course, and these keep getting added to every single week or month as more tools get released or updated; I will update this course. Now, quickly, while you're here, let me jump into the screen and share one more thing that will help clarify, so we have a full understanding before we go forward. In the next lesson I will go over all of these tools, so you'll get a little bit more information about them, and we'll also go from left to right here, fully understanding a production schedule for making a video. Briefly: you come up with your idea and script for a video (you can skip this stage if you like, and I'll teach you all these tools and more for doing so). You then might want to come up with a style guide, images, and a storyboard. These really, really help when you're generating images and moving to video: having a theme makes sure there's consistency. I'll teach you all these tools, especially Midjourney. Then you'll generate a video, and I'll teach you all these tools with a concentration on Runway and Veo 3. Then we'll have a look at audio, making sound effects and music, and I'll teach you all these tools, and then we'll upscale everything and make it look beautiful. Really exciting stuff, and I'm glad to have you here on the course. Let's go into the next section, where I'll tell you all the tools we're going to be learning and exactly what they do, just a brief overview so you fully understand. Then we'll get into working out exactly what an AI video workflow looks like: how you piece this all together to make an amazing AI video.
=== Section 2: AI Fundamentals ===
— Essential Tools for AI Video Creation —
So, there are a lot of tools out there, and they're growing by the day, it seems. I'm going to show you a lot of tools inside this course, and I will keep updating it every week or month as new tools or updates are released, so this becomes the one-stop shop for AI video. The tools that work for one person on a certain type of project may not be what's right for you, so I want to show you as many as possible, and then you can decide, based on budget and need, which tools you want to use.
16
And for those of you wanting to skip through and not see all of the tools we cover (I do get this question quite a lot), here are my top three. First, Midjourney. This is probably the best image creation tool out there, I think, and you can now create videos with it too, with less control than other tools, though that will probably improve soon. So image tool, yes; video, yes, but not amazing; it is probably a staple in my arsenal. Next is Runway. They have great video with great control over it, and they also have image, though not as good, I think, as Midjourney. And then Veo 3, which is super amazing quality, probably the best text-to-video quality out there. It's also the most expensive, but it is all in one: image, audio, lip sync, and realistic results, so you might use it rather than having multiple tools. Those are my top three, and I've covered them all here. So if you want to skip through, go to the Midjourney, Runway, and Veo 3 lectures, and you can see all about those.
46
So, the objective for this lecture: let's go through all the tools inside this course and why I've chosen them. And the outcome: you'll know the tools that are out there, what they do, which ones are best placed for you, and which ones you'd like to move forward with. Now, to make an AI video, you could use as little as one, possibly two, tools to go from coming up with an idea all the way through to having a finished video. Or you could use as many as five, eight, ten-plus tools, bringing them together to make the perfect AI video for you. It all depends on what you want to do.
63
So in the next lecture, I'm going to show you different workflows: a really quick one, and then the long one we're going to go through on this course, which I think is the ideal production flow for making an AI video. But let me just quickly jump into the screen and show you the different tools we're going to use on this course. If you come to our site, AIvideo.school, and scroll down, you can see the many tools that I'm showing you and teaching; the list continues to grow and keeps getting added to.
78
Even this is not exhaustive. But if I bring up the slide we looked at in the previous lecture, here are some of the main tools I'll be teaching you for each section. Don't worry about this process yet, this kind of production route from idea through to images, video, audio, and upscaling; I'll explain all the different ways you can make AI video, and the different workflows, in the next lecture. For generating ideas and scripts, we look at ChatGPT, Copilot, Chatsonic, Claude, Squibler, TextCortex, and several others. Then, for everything from creating a style guide and storyboard to the all-important images, the main tool we use is Midjourney, which I think is a market leader in this space. But we'll also be looking at Photoshop for generative fill, Gemini, DALL·E, Storyboarder.ai, Stable Diffusion, Grok (that's Twitter, or X.com), Meta AI (that's Facebook), and more besides. For video, the main tool we're going to look at is Runway, but also Pika, Haiper, Kaiber, Hedra, InVideo (that's something a little bit different), Luma Dream Machine, Kling, Akool, Veo 3, and even more on this list.
106
For audio, the main tool I like to use is ElevenLabs, but I'll show you some others, like Filmora, that have some great voice tools inside their platform, as well as Suno and Udio to make amazing AI songs with and without lyrics. It's really, really incredible. And then we'll upscale. I'll show you a couple of different ways to upscale, even using Midjourney, but Topaz is a market leader for upscaling, so I will show you that tool. All of these, and even more.
119
Of course, you don't need to use all of these tools; that would be a lot of tools to be using. But when I show you each section and what's available in the marketplace, you can decide which one you think you'll get on with best for making your production. Budget perhaps plays a role too, with some more expensive than others, and you can decide: I want to make my video by coming up with ideas with ChatGPT, which is free. Then I want to generate images; perhaps I already use X and I'm going to use Grok for that, or perhaps I like using Meta AI, or perhaps I want a really good paid-for product without spending that much on a subscription, so I look at Midjourney, for example. Then I want to make videos; perhaps I'll use Runway, or perhaps I like Pika or Luma Dream Machine better. And I'm going to get my sound effects from ElevenLabs and make music for free with Suno; I'll show you that. Then I'm going to upscale with Topaz, or perhaps I'm only making videos for YouTube and don't need to upscale at all, so I can save money and time there. It's completely up to you. And to understand exactly how this whole process works, from idea all the way through to a finalized, edited, amazing AI video, we need to understand some workflows for AI video. So let's talk about that in the next lecture.
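The pick-one-tool-per-stage decision described above can be sketched as a toy planner. The catalog below is an assumption for illustration only: the tool orderings and free-tier flags are a rough reading of this lecture, not pricing advice, so check each vendor's current plans.

```python
# Production stages in the order this course teaches them.
STAGES = ["idea", "script", "images", "video", "audio", "upscale"]

# (tool, has_free_tier) per stage, preferred option first.
# Illustrative assumptions only; plans and pricing change often.
CATALOG = {
    "idea":    [("ChatGPT", True)],
    "script":  [("Claude", False), ("ChatGPT", True)],
    "images":  [("Midjourney", False), ("Grok", True)],
    "video":   [("Runway", False), ("Veo 3", False)],
    "audio":   [("ElevenLabs", False), ("Suno", True)],
    "upscale": [("Topaz", False), (None, True)],  # None = skip (e.g. YouTube only)
}

def plan(free_only: bool = False) -> dict:
    """Pick one tool per stage, preferring free tiers when asked."""
    chosen = {}
    for stage in STAGES:
        options = CATALOG[stage]
        if free_only:
            # keep free options if any exist, otherwise fall back to the full list
            options = [o for o in options if o[1]] or options
        chosen[stage] = options[0][0]
    return chosen
```

Running `plan(free_only=True)` mirrors the lecture's budget example: ideas with ChatGPT, images with Grok, music with Suno, and skipping the upscale step entirely.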
=== Section 3: AI Video Workflows (Production Process) ===
— Important Course Update Announcement —
Just so you're aware, a little update lecture right here. You'll see in the course project, if you follow along as I make a project at the end of each section, that I'm using Runway, and Runway has updated since recording: the latest version available right now is 4.5, so the layout looks slightly different for image, video, audio, and how to use it. I'm keeping those lessons in, where I create the project with an older version of Runway, because the principles are the same, but the layout has changed slightly. So in section 10, under video, for Runway, I'm showing you the latest version, and with all tools we do our best to update to the latest version of the AI tool when it releases. The project may show an old version of Runway as I'm creating, but I teach you, and you'll see in section 10, the latest version of Runway, which looks like this, where you can create image, video, image to video, and text to video, as well as use lots of different tools available inside Runway. Just to make you aware: we have updated it, and the latest version is covered in section 10, where we cover all the AI video tools. Thank you.
=== Section 4: Generating AI Video Ideas with AI ===
— Let’s Make an AI Video with 1 Tool FAST —
So, making AI video can be a long or a very quick process, depending how professional you want to get with it. Throughout this course, I'm going to show you, every step of the way, how to make a really professional AI video, and you can choose and take away the parts you need for the projects you're making. But it can also be really, really quick. Let me show you now, with this workflow, a really quick AI video you can make. Actually, you could create an AI video in just one step, with one tool, if you wanted to. In the next lecture and further on, I'll show you better ways to do this to get more consistency, like creating your images and then turning those into video. But if you wanted to, it can be done in one step.
11
Let me show you two tools here: Midjourney and Veo 3. For example, let's do a really simple prompt (we'll get into prompting in a bit): a man in his thirties walking down the sidewalk in NYC, in New York City, futuristic cyberpunk. Let's do that. Don't worry about typos too much; it understands. I haven't really given it much detail; we'll get into prompting, what's needed, and different styles like cyberpunk later. So let's run this and see it really quickly. And let's also do the same thing at the same time over here in Flow, where I'm on text to video, so you can do this in one step; let's hit run with that. Now let's come back to Midjourney and wait. Okay.
23
They've finished generating; cool styles. I like this one the most: cyberpunk, with these neon lights here. Let me open that up. That looks really good. Okay, so now I'm going to animate this and turn it into video. I can do this automatically, or I can animate manually and tell it what I want it to do. For the sake of this, I'm going to make it as easy as possible: let's do low motion and run that. If you go to section 9, I show you how to use Midjourney for images, and in section 10, how to use it for video; I go into loads more depth there. For now, I'm just showing you how quickly you can create video. So right now it's generating a video with low motion, so it should just move a little bit. Let's hop back over to Veo 3 here. Okay.
39
Veo 3 has finished. I obviously didn't give it any details about what the man looks like or the camera angle (do I follow from behind, or from the front?); I just let it run. So let's play this for a second and have a look. Really nice. Wow. Look at the cars flying, and I can hear audio: futuristic space sounds as he walks past people. A very realistic walk. Really cool, and exactly the style I asked for. Perfect. Veo 3 is really good with realism.

And let's go back to Midjourney now; that's just completed. If I hover over them, you can see them moving. Great. You can see the guy walking there, just a little bit of motion, and the cars moving. Really nice. If I click it, I can see it bigger. Perfect. So I can download this and keep it. And right there, in one step, just with a text prompt rather than giving it an image (I'll show you that in the next lecture and the next section), you can create a video. I've now got my video. Put a few of these together and you can get yourself a movie. In one step, I've managed to create the kind of moving image I wanted here, and it could be much better with a much better prompt, which we'll get into in that section. One step: that's how easy it can be to create AI video. Make a few of these shots, put them side by side, and you've got yourself a realistic video that would otherwise have meant going to New York City, finding an actor, bringing camera kit, filming, and perhaps getting permission to film somewhere. We can do anything we want: drone shots, close-ups, being anywhere in the world doing anything. We just created that super fast. So that was how you create a video really, really quickly. Now, there's a longer and better way to do this. I'll create another quick AI video in the next lecture, but we'll use a few more steps to make an even better video.
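The prompt used above is really just a subject plus optional style and camera notes. Here is a minimal sketch of composing prompts that way; the function and its parameters are my own convention, not part of Midjourney or Flow.

```python
def build_prompt(subject: str, style: str = "", camera: str = "") -> str:
    """Compose a text-to-video prompt from a subject plus optional
    style and camera notes, joined in a comma-separated list."""
    parts = [subject]
    if style:
        parts.append(style)
    if camera:
        parts.append(camera)
    return ", ".join(parts)
```

For example, the lecture's prompt is just `build_prompt("a man in his thirties walking down the sidewalk in NYC", "futuristic cyberpunk")`; later lectures add camera detail like "drone shot" as a third part.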
=== Section 5: Generating Amazing Scripts and Video Structures with AI ===
— Let’s Build a Better AI Video Together Using 4 Tools —
Now, in the last lecture, we used just two tools to create an AI video, and it was a pretty nice video, but we can make an even better one using a few more steps. Let me show you some of these; we'll also do an even longer, more in-depth version throughout the course to create your really professional AI video.
10
Now, for example, for this workflow (and there's a whole section on workflows in a couple of sections' time, so you can decide which one works best for you), I can come to ChatGPT and say: produce five ideas for a short two-minute video about AI with just two characters and minimal scene changes. ChatGPT is making me some ideas. "AI past meets future": Professor H, an old-fashioned scholar in a tweed suit, and AI Core, where the professor's AI research accidentally activates a prototype. Okay. "The debug debate". "The AI interview": a curious journalist and an AI in human form, a robot designed to look and behave like a human. That's quite interesting. "AI therapy session": there's a client, a stressed-out human, and an AI therapist. Okay, that's really quite good. And a busy professional with an omnipresent assistant. Okay, I like this one; I'm just going to copy it over here.
34
Let's use Claude, which is perhaps one of my favorites for generating script ideas. I'm going to say: create a one-to-two-minute script around this idea; note, I am using AI video tools to generate the videos for this. The reason I add that note is that Claude is very astute at understanding what capabilities AI tools have. So when it generates ideas, it's actually pretty good at judging whether we should have very complex scenes or minimal scenes, things with less movement or more movement, and some of the limitations of AI, which we'll look at in the next section.
51
So here's my brief, and here's the script it's made for me. "Client, visibly more relaxed: You're surprisingly comforting. AI therapist: Thank you. And a friendly reminder, our next session is scheduled for next week. Don't forget to bring your human emotions. They share a moment of understanding. The client stands." And that was the end of the scene. If I scroll through, I can see the entire thing here, a full scene. Nice. It also tells me: "I've crafted a script that balances the serious topic of AI anxiety with moments of light humor. The dialogue explores human fear of technological replacement while positioning AI as a supportive, complementary tool. The script maintains a warm, reassuring tone." I can modify this, but no, this is fine. This is great.
75
I can also go back to ChatGPT and say: generate me two ideal prompts to use in Midjourney to generate images of these two characters, individually, in this scene. I know I'm going to use Midjourney for this, the AI image tool we looked at last time, so I don't even have to come up with my own prompts if I don't want to. Although, I do have a whole session on prompting and ideal ways to do it, and it's also on our site here: AI prompting, a whole page I've been developing with you guys, so you can learn each tool and what the ideal prompt for it looks like. We'll come to this later. So here it's given me prompts for the client in the therapy office and for the AI therapist.
96
So let's just copy this in here and prompt it. Let me read it over: shows the individual sitting on a comfortable couch; the client is casually dressed, looking slightly disheveled, holding a cup of coffee nervously; the room features soft lighting, earthy tones, and a sleek robotic device seated across from them. Okay, I'll just remove that last part. It hasn't told me the gender or age of the person involved; I'll just leave that down to the AI. I'll come back over and get the therapist prompt here: futuristic AI therapist, a sleek humanoid robot, polished metallic, in a gracefully modern therapy room, calm, expressive tones; the AI sits in a comfy armchair with a notepad. Great. Generate.
116
Okay, here are my images. I could regenerate and get some more of these, but I think I quite like this one. This is quite nice. Yeah, really nice. Okay, what I'm going to do now is just make sure all the fingers and toes are there, as you do with AI. The fingers, are they okay? One, two, three, four, five; one, two, three, four. I'm going to use the editor and, quite simply for this, type "fix hands". Let's go back and check out our therapist. So if I have this shot here, they're looking this way; I like this one looking this way. This is a nice one right here; I like it as is. I'm just going to edit and add "holding tablet, iPad", and submit. Okay, let's go back and check out the fingers. No; almost; yes, this one. Okay, I'm going to upscale that to get nicer quality. Let's have a look at our AI therapist. I quite like this one looking this way. Okay, let's upscale that. This image is nicely upscaled: download. And our therapist too: download.
150
Now, I was going to show you Runway right here, to turn those images we just made in Midjourney into video so we can have consistency and keep using the same character each time. But the rest of this course, and the main project, show you Runway in depth, and things have updated and changed since. So I'm adding this in to show Veo 3 here for consistency, by adding those images, rather than Runway. You'll see loads and loads of Runway in this course, and that might be the tool you use, but this gives you the option. Here's Veo 3. Instead of the text to video you saw me use in the last lecture, let's go up here and choose frames to video. A frame is basically another word for an image. I can choose a start and end frame if I want to, or video, but let's go here.
176
I've already added one here, and I'll show you how I did that: I go to upload and select the image I want. It asks: do I want to crop it? Yep, let's crop and save. And now I've got my two images right here. While this one uploads, let's select this one right here, the AI therapist, if you like. Now I can describe what I want. Oops, I actually hit run before I explained what I typed. I wrote: very slow camera move in; the therapist's arms move (there's a typo there, it should just be "move"), as if explaining; and they say, "Welcome to the session." Then hit run, and it's loading right here.
201
Now I can do the same thing if I select the woman who is getting the therapy session. And I can say (oh, that's just loaded): the camera moves in very slowly; she says nervously, "Yes, I'm ready, I think." Later in the course, you'll see that we can do things like specify the accent, how they say the line, how quickly they say it, with what kind of tone, and so on; we can prompt for all of that. So we've actually got video from an image, plus speaking audio, and you can even have background music if you want, all in one prompt. So let's play that.
220
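The structure of these prompts (camera move, then action, then the spoken line and its delivery) can be sketched as a tiny helper. This is just an illustration of the structure used in this lecture; the function and parameter names are mine, not Veo 3 syntax:

```python
# Combine camera move, action, and dialogue into one image-to-video prompt,
# mirroring the structure used in this lecture (not official Veo 3 syntax).
def speech_prompt(camera, action, speaker, line, delivery=""):
    said = f'{speaker} says{" " + delivery if delivery else ""}: "{line}"'
    return f"{camera}. {action}. {said}"

print(speech_prompt(
    "The camera moves in very slowly",
    "The therapist's arms move as if explaining",
    "He",
    "Welcome to the session.",
))
```

Keeping the three parts in a fixed order makes it easy to swap just the dialogue when you go back and forth between characters.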
Now let me go back here and show you this. Let me play it. "Welcome to the session." The camera moves and he speaks, but then they keep talking: they only say one line, yet the mouth keeps moving, so I would regenerate that. Not an amazing first generation, but pretty good. The camera moves, and the movement is very realistic; the image itself feels slightly computer-animated. That's the feel this has. If I wanted a far more realistic image, instead of using Midjourney I could always generate an image in here, using Imagen from Google, to do that.
So I've got that one image right there; now let's turn the other one into video in here. While I'm waiting, I'm just going to hit here to upscale that to 1080p, and it will say when it's ready so I can download it right here. And now if I click this one... let's see, my prompt, remember, was: the camera moves in very slowly. She says nervously, "Yes, I'm ready, I think." "Yes, I'm ready. I think." Yes, that was really super. I'll put this together in a moment and show you. That was almost a British accent, and the lip syncing was very realistic. So let's download both of these, upscale that, and put them together, and they'll look something like this: "Welcome to the session." "Yes, I'm ready. I think."
So you can see, just from those two images and going back and forth, that you could actually create a conversation, a whole scene, with that original script I did earlier. This was just an example to show you, but you could quite easily be making whole short films here: either using images you create in Midjourney, or creating them in here, getting your script, dialogue and prompts from AI, and then using something like Veo 3 (or Runway, as you'll see later) to create these whole scenes. You could go a step further: later in the course you'll see we use Topaz to upscale these. These go to 1080p anyway, but if you wanted even higher quality than that, you could do it. If you wanted a full start-to-end process, you could do that too; this is the longer version. It gives you more consistency, as opposed to the last lecture, where we did text to video, where consistency is harder to get. It's the longer route, but it's how you get consistent characters, because I could now reuse this same character in Midjourney. You'll see me use something called Omni Reference, where it keeps using the same character over and over; or I could just keep using the same image and prompt a different line of conversation, going back and forward. That's how you create consistency within your videos. So this, you can see, is a much more in-depth workflow, and we can go even deeper. You'll follow along in this course, learning all the different tools for each step, so you can choose which steps you need and which tools you want to use for doing this.
=== Section 6: AI Audio – AI Music generation and AI Voices ===
— The Automatic Way to Make AI Video… but… —
I'm just going to put this lecture near the beginning of the course here. We come onto this tool, InVideo AI, later. I just want to bring your attention to it because it's somewhat of a hack. This is the main tool doing this right now, but there will be more and more, and we're getting closer to the real automation side of just telling AI models, "I want a video of this, make it for me." We've seen people on YouTube, for example, make explainer videos, and the main tool they're using is InVideo. So I'm going to show you real quickly here. It's not great and it's not perfect, but it's going to get better and better, and there will be more tools like this, where you can just say "make me a video about XYZ" and it punches out a video for you. That's not what this course is about. This course is going to show you how to have a lot more control, where you can generate exactly the image and video you want, with the voiceover and music you want, with AI. You could be using InVideo and then adding some of your own shots to put the two things together; that's fine. But this is a quick hack on how to use this tool to automate this completely, and we're getting closer to this actually being the way most AI content is going to be made. Most YouTube videos you see in the future that are explainers or mini docs are probably going to be made like this. I'll show you this now, but this isn't what the course is about, and we do cover InVideo later in the course.
Now, this tool InVideo is a complete AI video hack if you don't need any control. Lots of channels do this, I know: they just take information from a Wikipedia page, turn it into a script with ChatGPT, put it in here, and spit out a video. It can actually be even simpler than that: if you want to, you can just create in InVideo directly. But you have a lot less control, so this will only work for certain styles of channel, depending on what it is you want to make.
So let's dive in here. invideo.io is the workspace; this is InVideo AI. I've got some options right here; I can describe everything. I can start by saying "create a short video", an explainer video. Let's go back to one of the things I was showing earlier: this video about the history of North and South Korea. Let me take that right here, go into InVideo, and create an explainer video. Now, I could go into ChatGPT and say, "Write me a two-minute script for a YouTube video about the history of North and South Korea," let it spit that out, and copy and paste it into here if I wanted to. But you don't actually have to do that. So I'm going to say: create a two-minute video about the history of North and South Korea.
This is educational, and I don't want it to be skewed in any way, so: use only generated clips. Depending on the package you have (and I'll explain that when I go over this tool more later), I can use only stock media, only generated images, or only generated clips. Let's do generated clips. Do I want background music? Yes. Do I want the background music to be energetic? No; inspirational, let's say. Okay. Do I want subtitles? Yes: add bold subtitles with a popping effect for this video. Do I want voice actors? Let's say use a male voice. If I was targeting this at children for education, I can say a clear American voice for the narrator. Style: I don't want Disney Pixar, so I'm going to take the style away. Language: English. Music preference: use YouTube Audio Library only. This means none of the music will be copyrighted, and I can use it for my YouTube videos, so I can make money from this, basically. And that's it. Okay, let's continue. Yep, yep, yep. It can be as simple as this: let's generate the video.
After it's done analyzing for a bit, it's going to ask me some more questions. I'm using generative media, not stock. Visual style: do I want realistic, animation, or anime? We did animation last time, so let's do realistic and see how well it does. Audience: students, yes. Platform: YouTube. This matters because it determines the format: 1:1, 9:16, or 16:9. Let's continue. Now, after 10 minutes or so, I can see there's a preview here that it's going to show me in 480p; I can download it in higher definition. I have not seen it yet. I'm going to play it with you for the first time, and let's have an honest review.
"Okay, dudes, picture this. It's 1945 and Korea is finally free from Japanese occupation. Yay! But wait, things are about to get complicated. The US and Soviet Union decide to split Korea at the 38th parallel: the North goes to the Soviets and the South to the Americans. This made both sides really mad. Fast forward to June 25th, 1950, and boom, North Korea invades the South. The Korean War begins, and it's a mess. After three years of fighting, millions of casualties, and a whole lot of uh-oh moments, the war ends in a stalemate on July 27th, 1953. After the war, North and South Korea went their separate ways, like, big time. In the North, Kim Il-sung established a communist state with a personality cult that would make even Stalin go, 'Dude, that's a bit much.' Meanwhile, in the South, they were all about that capitalist life. South Korea went through this crazy economic boom, known as the Miracle on the Han River. They hosted the 1988 Olympics, became a tech powerhouse, and even gave us K-pop. You see, while North Korea was busy with its nuclear program and making everyone nervous, South Korea was out there becoming the world's 10th largest economy. Talk about different priorities."
Okay, so I can obviously give it more commands here to edit this. That wasn't bad for a 10-minute wait, and those images you saw were AI generated; none of them were stock. There's obviously a little bit of morphing: the K-pop girl at the end morphed a bit, and so did one of the soldiers at the start. But if you want to create simple videos... Yes, the language was a little bit off: I said educational for YouTube, and it was very relaxed, informal, colloquial language, like "hey, dudes" and stuff. But then, I only chose "American, clear" for the voice. I would just keep regenerating, playing with this and editing it if you wanted. You can see how, in 10 minutes, that was all AI-generated footage: it did a script for me and told me the story of the Koreas, the war, and what happened to them economically, et cetera. So in 10 minutes, I could do this. I mean, I would absolutely do this again and again to get the voice right and the tone right. But once you have one perfect, you could just regenerate videos for the same channel time and time again, automatically. As it gets better and better, this is going to be a game changer, and you're going to be able to do fictional stories too. Well, you still could now, but maybe I'll do that later. It's going to be an absolute game changer. This is AI cheating.
=== Section 7: AI Video Style Guide / Moodboard ===
— Hack! A Quick Way to Prompt & Work Faster with AI Video (Whispr Flow) —
Now I just want to interject with a tool that's going to really help you here with prompting. The thing with prompting: if I come over to something like Flow here, where I'm using Google Veo 3, and prompt for something like this... let me write this out. "1980s. A 16-year-old boy, white, brown hair, in a rough neighborhood in Los Angeles, is refilling magazines," etc., etc. The magazines have women on the cover; it's a women's magazine box in a rough neighborhood. "He's looking around to make sure no one watches him as he feels it." That's meant to be "fills it", as he fills this magazine box.
The problem with a prompt like this is that it obviously takes a long time to sit there and type it all out; it's going to take ages. So there's a quicker way to do this, and it's something I've been using quite a bit just recently. There are many tools, but the one I like to go to is Wispr Flow: w-i-s-p-r, at wisprflow.ai. You can use this for free; there are limits to the amount it can be used on the free tier. If I go to pricing, you can see the monthly cost: 2,000 words a week on Mac and Windows, 1,000 words a week on Flow for iPhone, and then about $12 a month for more. So it's not a really expensive package, and free will probably cover most of you. Then it's really simple: just install it and follow the online instructions it gives you. It's super simple to do, and it tells you where to allow permissions, etc., on your machine. It took me maybe two minutes to set it up.
And now when I come in here, I can just go to my text prompt box, hold down the Fn key (in the corner of my keyboard on a Mac), and say something like: "Female, aged 25, white with red curly hair. She is wearing sunglasses and looking directly at camera. In the background is the backdrop of Los Angeles. The year is 1950. Cinematic shot, daytime." And I can let go, and it fills it in for me, and I can hit run. You know how long that would have taken me to type, and then redo it, redo it, redo it. Obviously I can check it before I hit enter here. I can also rerun this and do it again, and I can even come in and say, oh no, I don't want that: I hold down the key and say "blonde". Oh, I put it in the wrong place there, so I just did that wrong; let me do it again. I hold this down, say "blonde", let go, and it replaces it with blonde.
So you can just use that, which is a super quick way to prompt when you're making mini movies or lots of scenes, where you have to prompt, prompt, prompt, and you want to get dozens of shots. Imagine each one takes you a minute or two to type out, and you've now made that 10 seconds: you're saving yourself perhaps a minute and a half per prompt. If you were doing ten, you just saved yourself 15 minutes; if you do 30, you save yourself 45 minutes. Just holding down a key, with a free tool, over the days, weeks, and months you're doing this, you're going to save yourself days of time. A ridiculous amount of time saved.
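The arithmetic above is easy to check. A minimal sketch (the per-prompt times are the rough figures from this lecture, not measurements):

```python
# Rough time saved by dictating prompts instead of typing them.
# Figures from the lecture: a minute or two to type, roughly 10 seconds to dictate,
# i.e. about a minute and a half saved per prompt.
TYPED_SECONDS = 100      # time to type a detailed prompt by hand
DICTATED_SECONDS = 10    # time to dictate the same prompt

def minutes_saved(num_prompts: int) -> float:
    """Total minutes saved across a batch of prompts."""
    return num_prompts * (TYPED_SECONDS - DICTATED_SECONDS) / 60

for shots in (10, 30):
    print(f"{shots} shots: ~{minutes_saved(shots):.0f} minutes saved")
```

With 90 seconds saved per prompt, ten shots come to 15 minutes and thirty shots to 45 minutes, matching the figures above.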
Oh, and here is the generation; the first one's done. Let's have a little look at it while we're here, why not? That's playing: hair blowing in the wind, that's nice. Looks a little bit like Emma Stone. And then this one here, yeah, another one, with Los Angeles in the background. I like this one better, I think, because this sign is skewed. Nice. Really nice.
So do save yourself time. And it's not just when you're prompting here, obviously: I could just go over to here, hold it down, and dictate my URL, dictate typing in notes, anything you want it for. Super handy, super handy tool. Please use it to save yourself some time. Okay, I'll see you in the next lecture soon.
=== Section 8: Storyboarding (your video layout) ===
— Your Ultimate Course Workbook & Resources —
Now, before we begin, I just want to draw your attention to the course workbook underneath all of these lectures. There will be an individual download, where there is one for that lecture, explaining what we've gone through in written form; or there's the entire course workbook that you can go through, which is like an AI video school publication, a whole tutorial that you have access to having taken this course. So if you want to follow along with the step-by-step guide in download form, print it out, have it as a book if you want to, then you have access to that within this course. Go ahead and download it if you want to follow along, or get it up on your phone or laptop whilst you're doing this course. It's completely up to you.
=== Section 9: AI Image Generation ===
— Course Settings & Support —
If at any time you have any queries or questions about this course, I want to draw your attention to our page, aivideo.school. There is an FAQ section on there; if anything is unanswered, or you have any real concerns or queries, then of course get in touch on our site. Or, on whichever platform you have taken this course, you can contact the platform directly: for example, if you have any problems with playback or anything like that, contact them directly. If you have a query for us, or you just want to share some work or anything like that, then of course contact us and send us a link; I love to see the work that students are doing, and I would love to have a look. And do give me feedback. Some other things also: you can slow down or speed up the lectures. Sometimes I get very excited and talk a little too fast, so especially if English is not your first language, you can slow me down. Some people like to speed me up because they don't want the course to take too long and want to just get through the lecture. You have those controls inside the course. Okay, let's get into some AI fundamentals.
=== Section 10: How to Make AI Videos ===
— Get Access to Your Course Pages Here —
Now, just a quick lecture to show you where your links are for the course pages. You'll see me show some pages inside the course, and I want to show you how to get access to them, because people were having trouble accessing them. So I've made it super easy for you. If you come to just aivideo.school, you're going to land on the promotional page, and obviously you already have the course. So the easiest way: in the next lecture, there are links in an article you can click through. Make sure you have the https:// at the start, otherwise it might return an error. If you're already on aivideo.school, just add forward slash course pages; and remember, it's https://, then aivideo.school, then course pages. Now, if I go on to course pages, here is the simplest way: you can just click through to AI prompting, and you'll see it come up on screen as the aivideo.school AI prompting page. But make sure you have the https if you're coming to these links directly, just like this. The easiest way is to come over to course pages, and here you can just click through to the different pages I mention in the course; all the links are in the next lecture, in an article to click through. But again, make sure you have the https, and you can go through to those pages that are there for you guys to follow along with. The quickest and easiest way is just to access them with these buttons. Also, after signing up for this, you will have received a welcome email from Udemy for this course, and the links are also inside there. But the easiest way is with the buttons, as I mentioned. Okay, I'll see you as we continue on with the course. See you in the next lecture.
=== Section 11: AI Sound Effects ===
— Links (Course Website Pages) —
[No transcript available for this lecture]
=== Creating a Mini-Movie with AI (text to video) ===
— The Foundation of AI: Get Your Prompting Right —
On to AI fundamentals. In this section, we're going to go over some of the backbone knowledge that you need regarding AI. Please don't skip through this. You'll probably know some of it if you have an interest in AI, but there's going to be something in every lecture that you haven't covered yet. First, we're going to talk about prompting. Prompting really is the be-all and end-all for all AI, whatever it is you're trying to do: you could be trying to get information or an answer to a question, to generate text, or, as this course concentrates on, to generate and direct images and video. Prompting matters, and how you prompt, and the structure of your prompt depending on the platform and tool you are using, really does matter. I know that sounds like a lot; don't worry, I've broken it down systematically, step by step and really, really clearly. The objective of this lesson is to understand exactly what prompting is and how to do it. The outcome: by the end of it, you're going to understand, for the platforms you're using, what style of prompting is needed to get the best results, and best practices. So let's jump on into the screen; I want to show you some bits.
Now, you're going to see by the end of this lecture all the different prompting methods for different platforms. I won't go into all of them in depth (this would be a huge lecture), but I'll give you the key information and where you can find more. On the slide, over on the right-hand side, there's a prompting hierarchy. If all else fails and you don't want to go in depth into each of the different tools and how you're best placed to give a prompt for that tool, then concentrate on the prompting hierarchy: the shot type; the style of that shot; the subject, and what they are doing and look like; the mise-en-scène of the shot; and then extra details. But you can check all this out on the site.
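If it helps to see the hierarchy as a structure, here is a minimal sketch in Python that assembles a prompt in that order. The function and field names are mine, chosen to mirror the hierarchy; they are not from any tool's API:

```python
# Assemble a prompt following the hierarchy from this lecture:
# shot type -> style -> subject (and what they're doing) -> mise-en-scène -> extra details.
def build_prompt(shot_type, style, subject, mise_en_scene, extras=""):
    parts = [shot_type, style, subject, mise_en_scene, extras]
    # Skip any element you leave empty, join the rest in hierarchy order.
    return ", ".join(p for p in parts if p)

prompt = build_prompt(
    shot_type="wide shot",
    style="cinematic",
    subject="a lone figure standing on a cliff",
    mise_en_scene="misty forest at twilight, rising moon",
    extras="deep blues and purples",
)
print(prompt)
```

The point is simply that a fixed ordering stops you from forgetting an element; the exact field values change per shot.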
I'm going to share this with you and go through it now: loads more information that you have access to as a student of this course, where we delve deeper into specific AI tools. Check it out at aivideo.school/ai-prompting and you're going to come to this page right here. We have a little definition of exactly what a prompt is; in a nutshell, it's an instruction you are giving the AI tool you're using. Think of it as direction. And as mentioned, not all prompting is the same. With a text-based AI model like ChatGPT, where you're looking for a question to be answered or, in our case, looking to generate scripts or structures, you can have a back-and-forth conversation with those kinds of platforms. That's different from AI art generators like Midjourney, which we're going to use, or Runway, where it's far more instructional and the way you give commands in prompting is different.
Just down the page here are the main tools, and I'll probably add more to this page as we go on, so it might look slightly different. If you click any of these, you can actually go through: this is Midjourney, and they talk about what's needed for their specific prompts. They break it down quite nicely: if you're instructing it with a set image, put it first, then your text prompt, and then any parameters (that's the size or shape of the image that you want). And they give some great prompting notes here that you can go into. This is really nice: try to be clear about the context. Think about the subject; the medium (is it a photo, a painting?); the environment (indoors, outdoors, nighttime, etc.); the lighting (what's the lighting like?); the colors (vibrant, muted); all the kinds of shots you're using (we go on to styles in the next lecture, which goes into more about that); the mood (is it cinematic, moody, dramatic, calm?); and the composition (are we close up, over the shoulder, etc.?). So this is really nice official documentation. That's for Midjourney.
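Midjourney's documented ordering (image reference first, then the text prompt, then parameters) can be sketched as simple string assembly. The `--ar` aspect-ratio flag is real Midjourney syntax; the helper function itself is just my illustration, not part of any API:

```python
# Midjourney prompt order per their docs: [image URL] [text prompt] [--parameters]
def midjourney_prompt(text, image_url=None, **params):
    pieces = []
    if image_url:
        pieces.append(image_url)           # image prompts go first
    pieces.append(text)                    # then the text description
    for name, value in params.items():     # then parameters such as --ar (size/shape)
        pieces.append(f"--{name} {value}")
    return " ".join(pieces)

print(midjourney_prompt(
    "lone figure on a cliff, misty forest at twilight",
    ar="16:9",
))
```

Getting the order wrong (parameters before the text, say) is one of the easiest prompt mistakes to make, which is why pinning it down like this helps.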
There's a similar one here for Runway, which we're going to use for making video. All of this is pretty alien to you right now, these tools I'm showing you, but that's okay; it will become obvious soon. Runway, like I mentioned, is not conversational, so you give instructions, and they have some nice details about how you should do that on their page. And in exactly the same way for ChatGPT, there's a link here with really nice instructions on how to prompt it. What we have on this page (and we're going to do some examples in a moment; it won't just be me telling you how to prompt each one) I have divided into: prompts for screenwriting and idea generation for videos; prompts for image generation; prompts for video generation, whether that's text to video or giving a prompt alongside an image; and then prompts for audio.
If I go into any of these... let's just grab some right here. I can look at Gemini: it gives you how to use it, the differences between it and the other platforms, and also ideal prompts. This one's for creating a script, this one's for creating a structure, and you can see how these are laid out. You can go along and do this for all the AI tools we talk about inside this course, and more. Same with the image prompts right here: Midjourney gives you how to prompt, differences, some ideal prompts, and we'll run some of these in a moment. The same for all these image-generation tools; and then for video it's the same: some great prompts and examples, what's needed for each one, and all of these have the link to the website for the tool. And then we finish here: Suno is one of my favorite audio generation tools. It's really good. You can get whole songs generated, even tell it what to create a song about, whether it should be funny, in the style of country. Oh, it's really, really good. We'll get into that later in the course.
So let's run some of these prompts, I guess. But what you could do, this early on in the course, is go over to this page and just flick through, familiarizing yourself at least with the names at this point. You won't need to start running these prompts yet, but here are some of the names. For scriptwriting, we'll use ChatGPT, Gemini, Claude, Copilot, Perplexity, Squibler, Chatsonic, TextCortex. For images: Midjourney, DALL·E, Gemini, Adobe Firefly, Runway's image generation, Meta AI and Grok. Then for video (there's more than this, and more may be added by the time you look at this page): Runway, Haiper, Pika. And for AI audio, we're going to look primarily at ElevenLabs, then also Suno and a couple of others. So start familiarizing yourselves, and you can go and check out the sites if you want to; they're always listed at the bottom of each of the drop-down sections.
It's a great page, full of information, so go and check those out. Once again, one of the pain points when we're trying to learn AI is having everything in one place and understanding what's needed. Prompts are so important, so this page is really nice for having all of these tools in one place. So let's run some of these prompts while we're here, shall we? You want to see that, don't you, rather than just hearing me talk about it?
Let's do a scriptwriting prompt right now for ChatGPT. I'm actually going to take this one; let's copy this scriptwriting prompt: "Create a three-minute video script in the style of a crime thriller. The main character, an investigative journalist, discovers a hidden conspiracy in a small town. Include a dramatic confrontation and a cliffhanger." Okay, so let's just go over to ChatGPT, quite simply copy and paste that in, and let it run. Okay: title, "The Silent Truth". Opening scene: small-town secret, camera pans over, voiceover from the main character: "They say small towns hold fewer secrets, but in reality, they bury them deeper." Okay, cool. You can see it's listing out scene two here, scene three, and (it's still generating) it's a three-minute script. So we're getting a fuller script here, with characters and dialogue in the style of what I asked for, and it's got a cliffhanger. Look, really, really good. You can see how that prompt we gave it, given enough information, generated this. Now, I could instead just say: "Write a three-minute script in the style of a crime thriller."
Okay, let's just run that and compare. "Whispers in the Dark." So it's still going to give me a script, but there's no way it's going to be able to align with the thoughts or the ideas that you had, because you haven't given it enough information. I could run this multiple, multiple times and get a different result, because I'm not giving it enough information to align with what I needed. And just to show you that (we're going to use ChatGPT way more; we're going to run some scripts in the next sections), I could run this again when it finishes. See, it's remembered my previous prompts here: it's got a cliffhanger in, although I didn't ask it to. I could rerun it right here if I want to.
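The point about giving the model enough information can be made concrete: a vague prompt leaves every creative decision to the model, while a detailed one pins down the elements you care about. A throwaway sketch (the dictionary keys are mine, just labels for the elements from the prompt above):

```python
# A vague prompt vs. a detailed one built from the elements you actually care about.
vague = "Write a three minute script in the style of a crime thriller."

details = {
    "length": "three minute",
    "genre": "crime thriller",
    "protagonist": "an investigative journalist",
    "plot": "discovers a hidden conspiracy in a small town",
    "must_include": "a dramatic confrontation and a cliffhanger",
}
detailed = (
    f"Create a {details['length']} video script in the style of a {details['genre']}. "
    f"The main character, {details['protagonist']}, {details['plot']}. "
    f"Include {details['must_include']}."
)
# Every element you specify is one the model no longer has to guess.
print(detailed)
```

Rerunning the vague prompt gives a different story every time; the detailed one keeps regenerations anchored to your idea.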
So that was ChatGPT. Let's create an image, shall we? Let's use Midjourney for that. I'll scroll down to Midjourney and give it a prompt. Okay, let's take this one: "Generate an image of a lone figure standing on a cliff overlooking a misty forest at twilight. Use a moody, ethereal color palette, deep blues and purples, and highlight the contrast between the figure's silhouette and the soft glow of the rising moon." Nice.
You can see, as we mentioned, what it's doing, and we saw this on Midjourney's site when we looked at it: it's covering the color, the mood, the figure, what's in shot. So let's generate that right here. Once again, in the image-creation section of the course we go way more into Midjourney; you won't have seen this page before, and it'll look very alien to you. That's fine. I'm just going to run that, and let's see what it spits out. Okay, nice. Really good. So it has generated these images like this. I quite like... oh, this one's quite nice, with the white wrapping around the outside. That's more realistic; these two are almost more like an animation. And that's fine: I could have said something photorealistic or cinematic if I wanted to. We'll talk about that more later. So I'm going to download this one right here, and I'm going to explain why shortly.
So now let's generate some video. Now, I'm not a fan of using text to video; we've explained all the different ways you can generate video, but I'm going to show you an example right here of prompting text to video. Let's use Runway here. So, prompting for text to video. Once again, I like image to video, which is way more controlled, but we can test this right here. Let's generate it by putting in the prompt: "Generate a video showing a skateboarder performing tricks at an urban skate park at sunset. The camera should pan to follow the skater's movements with smooth transitions between shots. Add a lens flare effect as the sun sets behind the skater during a jump." Okay, so I'm in here, in the Gen-2 version; it will all be explained later. I'm just going to paste that in right here and generate it, and let's sit and wait while it's in the queue.
Okay, let’s play this, and I’m going to
336
show you why I’m not a fan.
337
So this, I have to use this version,
338
and I could have used another tool.
339
Actually, I’m going to do that while we’re
340
waiting.
341
Let’s use Luma.
342
I’m going to pop that same prompt into
343
here, and let’s see what that brings us.
344
While I just jump back, watch this.
345
Absolutely terrible.
346
This is not how you generate a good
347
quality video.
348
I’m going to show you how you can
349
do that by using the image to video
350
and the difference you’re going to get with
351
that.
352
Let’s jump over to Luma and see what
353
a different platform did.
354
Okay, Luma’s just been dreaming this up.
355
Let me show you that.
356
Okay, it’s better than before, but look at
357
the legs.
358
Look, the floor just changed.
359
I’ll show you in some limitations lectures in
360
a couple of minutes time, if you keep
361
watching the course, how this is an issue
362
with AI video.
363
But we can get it much, much better.
364
Please don’t use text to video.
365
Let me show you what you should be
366
doing.
367
So here’s what you should do. Staying in Runway here, this is image to video: using the image of a neon scene, for example, generate a short video sequence with animated raindrops falling, reflections on the pavement, neon lights flickering, et cetera. Now, remember I downloaded an image from Midjourney earlier. Let’s go back into Runway, but this time in the Gen-3 model. I’ll just drag that image in, let it load, and then give it a prompt. I’m going to keep it really simple here, and we’ll get deeper into prompting for video later on. I’ll just say “zoom in slowly to the figure”... no, just “zoom in slowly”. Let’s leave it at that. I only need a five-second clip. Let’s generate.

And while I’m waiting, to round out the comparison, let’s generate an image of that skateboarder. We wrote this text earlier: generate a skateboarder. Let’s copy that, go back into Midjourney, paste it, and say: generate an image of a skateboarder performing tricks at an urban skate park at sunset. I also had “add lens flare” in there, so let’s remove that and run the prompt.

Now I’ll jump back and check the results of the image to video we just ran. Let’s do this. Okay, and it’s zooming in nicely. Far more fluid, far more real. The mountain doesn’t suddenly change shape like we saw with text to video. Amazing.
Now, to really see how image to video compares with text to video, let’s grab the results here. Okay, nice. I like this one, so let’s drop it back into Runway, and I’m just going to prompt “skateboarding”. Once again, we’ll go into much more depth on how you should prompt for this later. Let’s generate and wait for the results, and we’ll compare exactly what text to video did against image to video, with pretty similar images and direction.

Now that’s generated; let’s see. Okay, not perfect, but much better than before. And that’s only the first generation. You’ll see that when we’re developing videos, we run generations multiple times, give more instructions, and so on. There are also limitations to account for; we’ll talk about those in the next couple of lectures. So that’s the difference prompting makes. Please, I’d suggest going over to the page I mentioned on our site for prompting, and familiarizing yourself with some of these tools and the ideas behind prompting.
So we can see the concept of prompting is vast, and every prompt will return a result. You could just put “video of a dog”; you could just write the word “dog”. There will always be a result, but the way you prompt, the style in which you do so, and what’s included all matter, and we’ll see this as we get on with the course. Just familiarize yourself with prompting somewhat.
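The point above, that a bare prompt and a deliberate prompt both “work” but give you very different control, can be sketched as a tiny helper. This is purely my own illustration of assembling prompt components; the field names are assumptions, not the API of Midjourney, Runway, or any other tool.

```python
# A minimal sketch: the more deliberate the prompt, the more control you have.
# Components here (style, palette, camera, lighting) are illustrative only.

def build_prompt(subject, style=None, palette=None, camera=None, lighting=None):
    """Assemble a descriptive generation prompt from optional components."""
    parts = [subject]
    if style:
        parts.append(f"in a {style} style")
    if palette:
        parts.append(f"using a {palette} color palette")
    if camera:
        parts.append(camera)
    if lighting:
        parts.append(lighting)
    return ", ".join(parts)

# Bare prompt: you will always get *something*, but little control.
print(build_prompt("a dog"))

# Deliberate prompt, like the cliff example from this lecture.
print(build_prompt(
    "a lone figure standing on a cliff overlooking a misty forest at twilight",
    style="moody, ethereal",
    palette="deep blues and purples",
    lighting="contrast between the silhouette and the glow of the rising moon",
))
```

The same structure works whether you paste the result into an image tool or a video tool; only the components you fill in change.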
In the next lecture, I want to talk about styles. That’s because you’re going to want a certain style to your images and video. Maybe you want it black and white and moody; that’s film noir. Maybe you want a cyberpunk style, a documentary style, or the style of a director, like Wes Anderson, which is a popular one. But you can’t know all the styles, obviously. So in the next lecture I’ll show you a page I’ve developed with a vast number of styles you can use in your prompting to get the best results.
=== Recreating AI Viral Trends from YouTube and TikTok ===
— Speak the Language of AI: Styles, Shots, and More —
Styles are important with AI video, really important and not discussed enough. When you’re trying to generate an image or video, you probably have a style in mind from something you’ve seen elsewhere. And trying to communicate the style you want to an AI can be really frustrating if you don’t have the terminology for certain kinds of styles. So the objective for this lecture is to go through some key styles you might want to use with AI, although the options are endless, of course. The outcome: you’ll have the terminology for certain styles and be able to use it in your prompting to get the styles you want.

Now, you don’t need to know all of these styles. If I know I want something dark and moody, I could say (and we’ll do some prompting in a moment so I can show you) “I want a kind of gritty nighttime feel with some neon signs”, and type that into, say, Midjourney to generate an image to turn into video. But if I know that style is called cyberpunk, I can instantly say “I want a cyberpunk image of...” and then describe my scene, and more quickly and accurately get the kind of image I want. Not to get too scientific about it, but AI models have been trained on the videos and imagery out there, so they know what cyberpunk is based on what they’ve learned. If you know what these styles are called, it’s going to be far quicker for you to get the results you want. So let’s learn some.

Now, check it out on the site; I’ll show you, and we’ll jump on screen in a moment. As a student you have access to a page where I go through over 30 different styles you might want to use, and show you where you can learn more. Some of the most common ones people want, I’ll put on the slide right here. Film noir: you could say “I want it black and white, gritty, old 1930s style, a bit like Blade Runner” (which is really a mix of film noir and cyberpunk), but the term is film noir, and it gets you shots like this: black and white, really eccentric lighting, backlit silhouettes. Or maybe you’ve seen films online and want something very colorful and symmetrical, kind of eerie and strange-looking. Well, there’s a director called Wes Anderson, and he often has shots like this: symmetrical, people dead center of shot, bright colorful backgrounds, wide-angle shots. If I tell the AI model “I want a shot in the style of Wes Anderson”, I can get shots like that far faster and closer to the image in my mind. Or perhaps I want that blurry background a lot of people call “cinematic”, because that’s what the iPhone labels it. That’s called a shallow depth of field, meaning only part of the frame is in focus. For example, if I hold my hand up right here, you can see that my hand is in focus and the background is blurry, and then focus comes back onto me. That’s a shallow depth of field. Those are just some key examples. Let’s jump on screen and I’ll show you some more styles you could be using, and then we’ll generate some prompts to see how they work.

So on this page I’ve created for you, the AI video school styles page, let me talk you through these prompts. Whether you want the moody, symmetrical compositions of Stanley Kubrick or the vibrant, whimsical aesthetics of Wes Anderson, like I mentioned before, you can elevate your results by building the atmosphere around these styles. This is what you need to give the AI. Here you can see an image created in Midjourney of a cowboy in a sci-fi setting: Western and sci-fi meeting in one image. Quite a cool image. So let’s go over some of these and then start to generate them here.
Once again, we’ll cover Midjourney in depth later; I don’t expect you to know the ins and outs of creating images with it, or what all of this means, yet. We’ll get there, but let’s look at some of these film styles. On this page I’ve added probably all the main ones you’re going to want: the kinds of styles you could be using and people you might be inspired by. This list is endless and personal to you, but let’s start with these.

Film noir: 1940s and 50s, characterized by dark, moody visuals. You’ll see it in movies like one of my favorites, The Maltese Falcon, or Double Indemnity; often gangster-type movies. The Western: we all know the Western frontier, often a more sepia, beige tone against blue skies, with very wide-angle shots using a wide lens to connote the vastness of the West. Good examples: The Good, the Bad and the Ugly, or Unforgiven. Science fiction: look at Blade Runner, which mixes a few different styles, or 2001: A Space Odyssey, Star Wars, et cetera. The sci-fi feel isn’t just in the color; it’s in the mise-en-scène, what’s included in the shot and the feel it gives.

Now, some that aren’t used as much but that I’m a big fan of. Melodrama was popular in the 1950s: bright color, a turquoise punch here, yellow there. Douglas Sirk was a big director for this, and some of his movies really utilize it, with one wall in one color and another in a different color. This was the era just after color film became huge in Hollywood, really playing with color, its connotations in shot, and the way characters and scenes are lit. A really, really nice style that sets the tone if you want a 1950s feel. These two are linked: the 1980s neon aesthetic, which you again see in films like Blade Runner and lots of other movies, and then Retro, which is also 1980s but could be anywhere from the 50s to the 80s. This is a Goonies-style shot, and it just has that tone on screen, doesn’t it? You’ll know it from movies like The Goonies or Gremlins, with that Technicolor look.

There are some other things here. Golden hour, which is what it says: that golden hue coming across. Blue hour, which often gives a Terminator-style shot with a blue hue and a science-fiction or fantasy feel. Cyberpunk: that Blade Runner style; it says here Ghost in the Shell, and even The Matrix used it in more modern times. Steampunk: that fusion of futuristic and Victorian era, which you’ve probably seen; I’ll put some examples on there, like Sherlock Holmes and Steamboy. Documentary style is realism, that observational feel; if you ask for it to be generated, you might get an intentionally lower image quality. And there’s expressionist and surrealist cinema with bright, punchy colors.

I’ll finish up with some director styles. Tarantino: a big fan of low-angle shots, drawing inspiration from Westerns as well as Asian cinema. Wes Anderson: that symmetrical, bright, punchy color, a real Wes Anderson feel, which is also very similar to Stanley Kubrick; slightly different, given the psychological movies he made. Then there’s deep focus and shallow depth of field. Shallow depth of field we spoke about in the intro: everything in the background is blurred; you might call it “cinematic” on the iPhone you film with. Deep focus is the complete opposite: almost everything is in focus, front to back. Films like Citizen Kane, one of the first to try this, keep that really deep depth of field so everything in shot is sharp. It almost feels theatrical, where the viewer has to look around the shot themselves rather than being dictated to. With shallow depth of field you’re dictated to: look at this flower, you can’t see anything else. With deep focus, it’s up to you what you look at. Really cool cinematic styles. And of course fisheye, that wide angle, which Tarantino could have used if he wanted to, and which gives that kind of eerie visual I really like. So why don’t we play with some of these?
Okay, let’s run a few examples. You can copy these; in fact, let’s do that. For Steampunk, let’s copy this right here and paste it into Midjourney. I’m going to go along and do a few of these: this one’s for documentary, let’s use that. Expressionist, let’s play with that too; I want to see what Midjourney comes up with for it. I really like the cyberpunk stuff, so let’s paste and generate that as well. And let me try a few more. I want that retro feel, so let’s put in: retro 1980s cinema style, a young boy, shocked, looking at camera, eating a candy bar, in the style of the movie The Goonies. Okay, let me just correct some typos, not that it matters too much, as the AI usually understands your spelling mistakes. Let’s put that in. And I actually want to do a melodrama-style one too: in the style of Douglas Sirk, 1950s melodrama, generate a cinematic scene inside a house of a woman drinking wine. Notice I haven’t given any direction about the woman or what she’s wearing. Okay, let’s put that in and see some of the results.

So this is the steampunk-style imagery. Yeah, really nice. We have that Victorian-meets-modern flying boat here, which looks a bit like one of those airships you saw during the wars. Really, really nice. You can see that just by putting in “steampunk”, it instantly knows the style I’m after. Let’s look at this documentary-style market scene: punchy colors, especially this one right here. This looks like it could have been filmed on a camera you’d use out in documentary making, maybe the Sony F16 or something, picking up real color and real vibrance, that real feel documentaries have. And this is the surrealism I was speaking about: a surreal scene, dark twisted cityscape, distorted angular buildings, long looming shadows; the lighting should be stark, high contrast. And that is exactly what we’ve got. Look at that. That looks really nice, exactly what I would want from the prompt.
Now we’ve got the cyberpunk one: generate a bustling, rain-soaked city at night, towering skyscrapers adorned with holographic advertisements; the streets are illuminated by neon signs, busy. Okay, let’s have a look. Yeah, exactly: we’ve got that Tokyo, downtown-New-York feel. Perfect, exactly what I want from cyberpunk. So this was the 1980s cinema style. This one is less so; I think it looks more modern, 90s or early 2000s. This one too, maybe not. But this one is best: late-80s, early-90s cinema style, young boy, shocked, eating a candy bar. What I might do is rerun this again; I might just remix it with a strong variation and see. I’ll explain how to use that in upcoming lectures, but I want to see another version of this image. So that was the 1980s retro feel, nice.

Now let’s continue looking through here. Here we have the Douglas Sirk-style melodrama: you’ve got this light coming in here, this punchy bit of turquoise coming in there, a woman drinking wine in a house. Yeah, this is just like the Douglas Sirk examples we’ve shown. Really nice, exactly 1950s melodrama, and it understood what I meant because I gave it the example of Douglas Sirk, who is a director, plus the word “melodrama”. Let’s keep looking at these shots. This is me just regenerating that retro shot; I still think these look a bit modern, and this one is probably best. You can keep regenerating and regenerating and regenerating. For example, this child here has too many fingers, so I would keep playing with it and tell it to fix the hand and make it only have five fingers, but we’ll get into all of that shortly. So I would now go ahead and check out the site here. Familiarize yourself with different styles.
And if you want something different that you’ve seen online, like Stranger Things, that’s a real mix of the 1980s retro aesthetic and some modern-day sci-fi. Also start watching things and building a list for yourself of the different styles you want; to generate the perfect images, you need to understand the styles to get the best from your prompts. Now, we have a task coming up at the end of this section, but in the meantime, if you want to familiarize yourself with different styles and different directors, there are loads of sources, and I’ll link and show you some. Coming at this from a student point of view, you might think: I understand what I want it to look like in my mind, but who is the director that did this? What is that style called? You can go and create your own personalized list of the different styles you like. Now, of course, AI is limited in certain things it can do. You saw me generate a child with an extra finger there, just one example of the struggles AI currently has, which are being improved on all the time. So there are some limitations, and things you can do about them, which we talk about in the next lecture.
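To make the idea of this lecture concrete, here’s a rough sketch of keeping your own style vocabulary: a lookup from a style name to the keywords it implies, so that “melodrama” or “film noir” expands into terms the model recognizes. The keyword lists are illustrative notes of mine, not an official mapping from Midjourney or any other tool.

```python
# Personal style lookup: one named style expands into the visual vocabulary
# discussed in this lecture. Keyword lists are illustrative, not canonical.

STYLE_KEYWORDS = {
    "film noir": "black and white, 1940s, high-contrast lighting, backlit silhouette",
    "cyberpunk": "neon signs, rain-soaked streets, holographic advertisements, night",
    "wes anderson": "symmetrical composition, centered subject, bright pastel palette, wide angle",
    "steampunk": "Victorian era, brass machinery, airships, fusion of futuristic and antique",
    "melodrama": "1950s Technicolor, punchy turquoise and yellow, stylized interior lighting",
}

def styled_prompt(scene, style):
    """Expand a named style into keywords and prepend them to the scene."""
    keywords = STYLE_KEYWORDS.get(style.lower())
    if keywords is None:
        # Unknown style: fall back to passing the name through verbatim.
        return f"{style} style, {scene}"
    return f"{style}, {keywords}: {scene}"

print(styled_prompt("a woman drinking wine inside a house", "Melodrama"))
```

As you discover styles and directors you like, growing a table like this is exactly the “personalized list” suggested above.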
=== Editing ===
— Limitations When Creating AI Video (and how to work around them) —
So, AI video and AI imagery are incredible. Amazing. But yes, right now they do have some limitations, not as many as you probably think, and there are ways around them. Things are constantly changing, of course: every week and month there are updates, new tools, and updates to existing tools, and everything is getting better at a rapid pace. But right now there are some limitations. Sometimes they appear when you generate images or videos, and you can rerun the generation to get past them, or there are ways to avoid them altogether. I’ll explain what I mean.
Now, AI limitations. Yes, of course there are limitations to AI. It’s not as good in every scenario as recording a traditional film; we’re not there yet, obviously, but we’re pretty close with a lot of tools. Now, this is an update. I previously recorded this lecture about a year ago, and we’ve come a long way since then. In that old lecture, which is this one here (I’ve got a few screens up for you), I talked about a lot of limitations around things like morphing and movement, and around using text, and I had a slide showing ways to get around some of them. But a year on, a lot of those issues don’t really exist any more; we’ve moved on a long way with AI. There were lots of problems before: it would often give a person six fingers, or things would look very unreal; suddenly the front of a head would morph into the back of a head. I had an example on that old slide where the front of the head became the back of the head and I had to redo it.

But we’ve come a long way in the last year. Using a tool like Veo 3 by Google, for example, results are very, very realistic. Runway is perhaps slightly less realistic in movement than Veo 3, though not by much at all, and I’ll show you examples. And there’s also Midjourney, which can now turn images into video; you can get something very realistic there, with perhaps some occasional morphing. I’ll show you some examples and workarounds for things like this.
So here’s a movie I made with Veo 3; it’s about a 30-minute movie, and I actually used text to video for it. You can see right here that the characters are very realistic in movement. In the running scene, perhaps less so; let me play a little of that. But you can see that people’s movement is actually very realistic. There might be some morphing on hands here when they open letters, but look, that looks photographically real. Really, really nice. So you work around what’s available. In this movie I used a voiceover, so that when shots weren’t 100%, people were still listening. And I kept the dialogue scenes short, like this one right here, where he’s talking to his friend in the prison. It’s realistic; the movement perhaps isn’t quite 100% sharp, but there’s some forgiveness allowed, I guess, given we were using AI.
So here’s an example, this one from Runway. You can see it hovers slightly there, but then when she turns, that’s good. Still, you can see it’s not 100% real; and to be clear, this example was Runway, not Midjourney. Now compare that to this ASMR video from Veo 3: look how real this looks, and the movement, and this was a pure text prompt, text to video. Very realistic. If you really wanted to critique it, you could point at some of these whites and say it looks slightly too polished, a bit soft, but it could easily be mistaken for real footage. That’s probably not what you’re going for anyway. If you’re making video with AI, you’re using it as another tool, not necessarily chasing 100% realism, though with small clips it can pass for real: not to trick the audience, but to use it as a different medium. Here, if I look at this person running right at the start, you can see a tiny bit of morphing in the face as she ran up the screen. This is also Veo 3, and apart from that first initial part, it’s really realistic too.
So those are some limitations with AI, and we’ve largely moved past morphing. The same goes for text: text used to be really bad in AI. It’s not 100% perfect, but it’s getting much, much better, especially if you use Midjourney and then turn those Midjourney images into video. And in some of these older examples, cars used to move badly; that’s all changed now. So when you do hit limitations, I wouldn’t strive for absolute 100% realism unless you’re doing very small, short shots cut together; then it can work.
There are features like this in Veo 3: when I’m using it with Flow and I want a character to speak (you’ll see me use this later in section 10), I can write “I want them to say XYZ”, and inside one tool the person you’ve prompted will actually speak. Before, if you were using Runway or something similar, you had to go to a separate lip-syncing tool, and it was never perfect. So I think Flow with Veo 3 is currently the best for lip syncing; it does it automatically for you, but sometimes the voice sounds slightly computerized. There’s a hack around that: take the clip, put the audio into something like ElevenLabs, and it can clean the audio for you; you redo it with the voice changer and then use that. A little workaround we’ll talk about later.
And then inside Midjourney, these are really nice, especially for social media, where people don’t expect them. You can tell from that walk it’s not 100% real and accurate, but there’s some forgiveness. So when you’re creating with AI, just make sure you’re working around your limitations. When I made that mini movie, I had a voiceover talking through the whole way. I say it’s made with AI and show the tools it’s made with; I’m not trying to hide it. Use the medium to your advantage. That’s how I’d work: work around the AI. If it generates something awkward, say somebody waves and it looks really strange in whichever tool you’re using, then just regenerate and regenerate, and you will get there.
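That regenerate-until-it-looks-right loop is simple enough to sketch. Here `generate` and `looks_ok` are placeholder stand-ins for whatever generation tool and review step you use; this is an illustration of the workflow, not a real video API.

```python
# The "just regenerate and regenerate" workflow as a small retry loop.
# `generate` and `looks_ok` are hypothetical stand-ins, not a real tool's API.

def regenerate_until_ok(generate, looks_ok, max_attempts=5):
    """Call generate() repeatedly until looks_ok(result) passes or we give up.

    Returns (result, attempts_used); result is None if nothing passed.
    """
    for attempt in range(1, max_attempts + 1):
        result = generate()
        if looks_ok(result):
            return result, attempt
    return None, max_attempts

# Toy stand-in: pretend the third generation is the first without artifacts.
outputs = iter(["six fingers", "morphing hands", "clean shot", "clean shot"])
clip, attempts = regenerate_until_ok(
    generate=lambda: next(outputs),
    looks_ok=lambda clip: clip == "clean shot",
)
print(clip, attempts)  # prints: clean shot 3
```

In practice `looks_ok` is you reviewing the clip, and `generate` might also tweak the prompt between attempts (“fix the hand, five fingers”) rather than rerunning it unchanged.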
So: limitations, work around them. And no, to answer a question people sometimes ask, you can’t just say “hey, make a movie about this” and get a flawless movie from start to finish. You’ll have seen that in the last section, when I showed you invideo and similar tools. That’s not what we’re doing on this course, and it’s not really what’s possible. We’re creating individual scenes and putting them together to make a movie. That’s it. Okay, limitations over. You can look at the old lecture to see what the limitations were like a year ago if you want to; it’s very different now. Let’s move on to the rest of the course.
=== Upscaling ===
— The Future of AI Video: Why You Should Act Now —
This is a really exciting time, and there are very few times in your life when you get to be at the forefront of something early on. Every now and again something comes along that changes industries, and AI has definitely been that. The objective of this lesson, and it won’t be a long one, is just to show you where AI has been, where it’s going in the future, and why you should be part of it. It’s really exciting; I get to geek out about AI with you for a few minutes. Now, if I put this slide up on screen, you can see there have been some really good and advanced movements. This is a chart (there are loads like it if you search the history of AI) showing all the progress that’s happened, and the biggest movement, happening through the 2000s and 2010s, is the growth of deep learning. AI has essentially been teaching itself and learning, and where image and video are concerned, learning about movement, video content types and styles. Now there’s enough of that learning that we’re able to create these amazing videos, and they’re only going to get better and better.

The future is going to be very exciting, and I mentioned this briefly in another lecture: eventually we’ll get to the point where we can’t distinguish AI content from real filmed content. At that point, why would you need to create anything using a camera any more? You won’t have to; we’ll be able to create everything from behind a desk. I have some material on the site here about future careers and why you should be at the forefront of this. We’re going to be able to create whole movies, not just adverts and scenes like we do now, and it’s going to get easier and easier, to the point where you’ll be able to say “make me a movie I’ll like”, and the AI will know your preferences; or you’ll tell it: a horror movie, an action movie starring this person, this length, with this kind of thing happening, and you’ll get personalized content. The future won’t just be movies created by other people and put out there; you’ll create movies just for you, content you want to watch, created individually. And you’ll probably be able to share them; you can almost be your own movie creator and director. There’ll be so much content out there that instead of waiting for the next Stranger Things, people will say “make me a series like Stranger Things” and it’ll be created automatically.
That is really exciting. So there are obvious reasons to learn this now: it empowers you and gives you these tools while we’re in the relatively early stages of AI. Given that graph, you can see it go up super quickly, and we’re going to see that inflection just keep going and going. Get in front of AI now. Let me show you on the site here some things that are really exciting about you and AI. On the site, AIvideo.school, I have some FAQs down at the bottom, and something that kept coming up was people asking what opportunities await after completing the course; it’s not just completing this course, but learning AI and AI video in general. There are lots of careers here that are coming and are currently growing. For example, marketing and advertising: creating cutting-edge AI-generated videos for brands. You don’t need to go out and film something; you’ve probably seen this on YouTube, where a 15-second, 30-second, or one-minute advert is created using AI only. Or you’ve seen people use AI to dub and re-lip-sync, so you could create one advert and it could go out globally, across the world.
45
Social media content is really exciting too. I can show you here on YouTube: I watch so many AI-generated channels. Some people get frustrated by these, and you are meant to (and I do) mark these on YouTube as AI-generated, but I’m watching comedy videos, concept trailers for movies that don’t exist that I want to see, spoof comedy, even AI music videos, like the music visualizers I’ve been creating myself and also watch a lot of. These are all out there as social media content. Not to mention that you can make your own individual projects, content channels, and film festival entries; you can create your own videos for a festival to showcase to the world what it is you want to do. This could be everything from a fiction piece, with characters you’re creating to tell a story, or a series, to a documentary. No longer do you actually have to go somewhere to make, off the top of my head, a National Geographic-style documentary showing parts of the world.
56
You can create all of that with AI and make documentaries about things. So good. And of course there are corporate training videos and e-learning, like you’re watching now. I didn’t even have to be on screen here. I could have had AI generate the script using ChatGPT, as you’ve briefly seen (we do that in the next section), generate a person, lip-sync and speak it, and generate the voice: training tools that don’t actually need to be filmed anymore. So now is the time to learn AI. I’m so excited to be doing this, to have you here at the forefront, and to let you know and see where AI is going to go. And when we get there, when we get to those crazy heights I’ve explained earlier in this video, what’s going to happen after that? It’s a very exciting time. But with that in mind, there are some ethical and legal implications of AI that need to be considered. Let’s talk about those in the next lecture.
=== Conclusion ===
— AI Video Ethics: Creating Responsibly and Legally —
1
Now I just want to talk to you about law and ethics with AI. To start off: this is in no way legal advice, and I am not qualified to give it. You should be checking the laws yourself, and I will mention a site and some others for doing that. It also depends on your location in the world; the laws will differ. But I’m going to talk about law and ethics. The law is changing constantly, and you should keep checking it depending on where you’re located. Ethics, I think, can pretty much stand alone and stand the test of time.
7
Let me get up a slide here. Obviously people are watching this course from other locations, but I’ve just got the EU and the United States up here as a generalization. Right now, the EU regulates AI video under a comprehensive, binding framework: the AI Act and the Digital Services Act. That requires transparency, i.e. the labeling of AI-generated or manipulated videos (when you upload to YouTube, for example, you mark the video as AI-generated), risk controls, and platform responsibility: the role of the platform to abide by the law based on where you are accessing that tool. So the approach is precautionary and provider-focused, with significant fines for non-compliance. The United States, on the other hand, doesn’t have a single AI law; it’s regulated state by state, I believe, relying on sector-specific federal rules, state laws, and enforcement after harm occurs, i.e. deepfake or non-consensual content laws. That approach prioritizes innovation and flexibility, sure, at the expense of not being preemptive; it’s reactive: when something happens, there’s a reaction, and perhaps laws will be made after and from that.
There is a site I’ve linked for you; let me get that up on screen. This is a pretty good site, OECD.ai (there’s an English version of it), where you can search for policies and issues by location, and it tries to keep up to date with all the different policies happening around the world. But the best way, I think, is to use our friend AI for this: for example, “Tell me what the current laws are for using AI video to create a video with the likeness of, for example, a celebrity in Germany.” You can get details from AI, but obviously back this up and check the sources at the bottom to make sure they are up to date.
Now, if I get the slide back up: I think this really only affects you if you’re creating manipulative or deepfake-style videos, not creating AI video in general. So the ethics and best practice I have for this course, at the bottom here: only create AI imagery or video using a person’s likeness when you have explicit permission, whether that’s yourself, a consented individual, or an approved AI-generated persona. For example, you’ll see later, if you’re inside Sora (and again, this may be different if you’re accessing from the EU compared to the US), that there are celebrity profiles on there, like Jake Paul, who have given permission for people to create AI videos using their likeness; I show an example of that. But make sure they are authentic and actually have permission to do so.
For public figures, or indeed anyone you don’t have permission for, follow local laws and platform rules if you are creating video that includes them. Generally speaking, the platforms abide by the local laws of where you are; unless it’s a rogue platform, the big major ones we show on the course should be following the local rules of where you’re accessing them. Now, a way around this, as I have at the end here: use AI actors instead of realistic replicas, make them clearly not the real person, and always label the content as AI-generated. For example, if you were to make a movie about a public figure (and you’ve seen examples of this), you can create someone to play them. Take a movie like The Social Network, about Mark Zuckerberg: they hired an actor to play him who is clearly not him, and it tells the story, but it’s unofficial in so much as it’s not him. Rather than trying to recreate and deepfake an actual public person, you could create AI actors to tell the story. That’s obviously the safer best practice, as opposed to trying to deepfake someone without permission, which we do not recommend on this course as part of the ethics best practices.
But the laws are forever changing based on your location, so do keep this updated. I’ve had examples where students have accessed some tools and not been able to upload a real image of a person, and then later this has changed, or gone back, depending on whether they’re in the EU or the US, and it won’t be the same on other continents around the world either; it will change over time. But this is only really for those trying to create videos using the likeness of someone else. If you’re just creating videos about a character you’ve created, i.e. an AI character, an AI person, with a story around that, then there’s not much to worry about here, unless you are manipulating a situation that didn’t happen. As long as your content is marked as AI-generated, it clearly is AI, and you’re not trying to pull the wool over anyone’s eyes, you should be fine. But please do stay up to date with local laws, and on ethics: do not copy anyone’s likeness you don’t have permission for. Most of you creating video here want to create stories, or perhaps you’re doing commercial-type content, shorts, that kind of thing, so this won’t apply to you, but now you know the best practices for ethics here. Okay, let’s get on; I’ll see you in the next lecture.
— Check Your Understanding: AI Fundamentals Quiz —
1
So, I’m going to end a lot of these sections with tasks or tests. I think this is probably the only one that has a test as such, because we’ve given you a lot of information in the fundamentals. So, there are 15 questions or so. I’m going to put them on screen in this video and hold each one for a minute per slide, so you can pause the screen, read them, and go over them if you want to just stay on this lecture. You can also download them and take the test other ways on this course, that’s no problem. So, I’ll put them up here. Just a word of advice: not all of the answers to these questions have been covered directly inside the fundamentals section. You’ll have had a general overview and you’ll understand the questions fully, but some of them will need a slightly deeper dive and external reading by you, just to make sure you have a full understanding of AI. Sorry, I know, worst teacher ever. I hated it when teachers did that: questioning you on stuff they didn’t directly cover. You’ll get what I mean. Okay, let me put the test up on screen and I’ll see you over in the next section.
— The Complete AI Video Production Process Explained —
1
Section three, now we are moving on to
2
look at workflows, or you might call this
3
a production process.
4
Now, what exactly is the production process for
5
an AI video?
6
You’re probably asking yourself, how do I go
7
from idea to then a complete video?
8
Do I just use text and then get
9
some videos and put them together in a
10
long timeline?
11
Do I do this all in one?
12
When do I get images and make those
13
into videos?
14
What do I do?
15
How do I go from start to finish?
16
And the answer is there’s no right or
17
wrong answer, and it’ll depend on project.
18
But what I’m going to do is show
19
you what we suggest in this course for
20
the best professional workflow.
21
And I’ll also show you some faster workflows
22
and some longer ones, the ones we’re going
23
to use inside this course, which will dictate
24
step by step, each section of this course,
25
what we look at and what we learn.
26
So we’re actually going to make some of
27
these workflows in this section.
28
I’ll just show you quickly how to do
29
each step.
30
So the objective for this section and this
31
lecture is to understand exactly what is a
32
workflow, why you need one, do you need
33
one, what they consist of.
34
So the outcome, you’re going to know what
35
it is that you want to use for
36
your projects, or if you do multiple different
37
types of projects in the future, the potential
38
workflows for those projects.
39
So if I bring up the slide here,
40
just to make it clear, we’re probably semi
41
-familiar with the traditional kind of TV and
42
film process of production.
43
That is, there would normally be kind of
44
a pre-production planning phase.
45
That might be where you come up with
46
your idea, the concept, get the script.
47
Depending on the show, you do budgeting, storyboard,
48
casting, location scouting, art direction, scheduling, pre-visualization,
49
et cetera.
50
From there, you’d move on to production, that’s
51
the filming phase.
52
That’s principal photography, the main filming: directing, camera and lighting setup, sound recording, makeup and costume, and continuity, to make sure there’s consistency throughout the project.
58
Then you move into post-production, which is
59
after you’ve filmed it, the editing phase, editing,
60
visual effects, sound effects, music composition, scoring, color
61
grading, sound mixing, et cetera.
62
Now, this is slightly different, obviously, when you’re
63
creating AI video, because you can create an
64
AI video completely by yourself.
65
Now, the production process I suggest we use
66
in this course, but I’ll show you in
67
the following lectures about how that can change
68
depending on your project, is something like this.
69
We do idea generation, we get an idea,
70
perhaps you already have one, but I’ll show
71
you the tools that we use to generate
72
ideas.
73
From there, we’ll make a script.
74
It may be we use AI to make
75
a fuller script, or a scene script, or
76
just a video structure, depending on what exactly the project is you’re making.
78
From there, we’ll then make the audio, which
79
is slightly different to TV, as you can
80
see.
81
We make the audio first, which is a
82
lot like when companies like Pixar make animation,
83
we start with the voiceover and the score,
84
that’s the music, because the editing and the
85
images that we need will be dictated by
86
the audio.
87
We make the audio first, I’ll show you
88
what tools we use to do that.
89
Now, these next two phases you don’t need,
90
but I’m going to do it in this
91
course, it’s best practice if you’re making a
92
very professional video, is we make a mood
93
board.
94
Using AI images, we get the feel
95
of what our project’s going to look like.
96
This is something like concept art or pre-visualization, which in the TV and film industry would sit in the pre-production phase.
100
From there, from the mood board, once we
101
know what our general feel of our project’s
102
going to look like, colors, et cetera, then
103
we’re going to look at storyboarding.
104
We’ll take the styles we’ve talked about before into the storyboard and generate images, say 10 or 12 images for our scene, and they’re actually probably going to become some of the images we use in our final video. They may be 10 images that break down the different parts of our video and act as a framework for our next step.
113
Now, as we’ve discussed earlier in the video,
114
there are different ways for us to get
115
video created.
116
We could do text to video, but the
117
best way we feel is to do images
118
to video.
119
You have the best control if you generate
120
images first, manipulate those images, get them looking
121
amazing, and then create videos from those.
122
We’ll make all the images that we need
123
for our project, for our video, then this
124
is kind of up to you.
125
You can up-res those images, you can
126
make those great quality first, then create the
127
video based on those images, or you might
128
want to create the video and then up
129
-res and make the video better quality.
130
We’ll take the images we had, we’ll convert
131
those into videos, we’ll use prompts like we’ve
132
shown to make sure we create the best
133
video that we need, and then in the
134
edit, we’ll put these all together into a
135
timeline to put together our full video, and
136
then we’ll add effects, which are probably more like a color grade, depending on whether the project needs it, plus the little sound effects that you’ll want to put in afterwards, because the videos themselves will dictate whether a sound effect is needed.
143
So generally speaking, that is a really professional
144
workflow for an AI video, and that’s pretty
145
much the workflow we’re going to use in
146
this course, step by step.
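To make that order concrete, here is the ten-step pipeline sketched as plain Python data. This is just a checklist representation of the slide, not part of any tool; the optional flags reflect the lecture’s note that the mood board, storyboard, and up-res phases aren’t strictly required:

```python
# A minimal sketch of the ten-step AI video workflow described above,
# written as plain Python data so the ordering is explicit.
WORKFLOW = [
    {"step": 1, "name": "Idea generation", "optional": False},
    {"step": 2, "name": "Script / structure", "optional": False},
    {"step": 3, "name": "AI audio (voiceover + score)", "optional": False},
    {"step": 4, "name": "Mood board", "optional": True},
    {"step": 5, "name": "Storyboard", "optional": True},
    {"step": 6, "name": "AI image generation", "optional": False},
    {"step": 7, "name": "Image-to-video generation", "optional": False},
    {"step": 8, "name": "Up-res / upscaling", "optional": True},
    {"step": 9, "name": "Edit (timeline assembly)", "optional": False},
    {"step": 10, "name": "Sound effects + color grade", "optional": False},
]

def required_steps(workflow):
    """Return only the steps you can't skip, in order."""
    return [s["name"] for s in workflow if not s["optional"]]
```

Dropping the optional entries gives you the faster workflows mentioned later, while keeping all ten is the full professional version.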
147
In fact, you have access to our site.
148
I’m going to jump into screen and show
149
you that, and I’ll show you how I
150
break that down so you can go away
151
after this lecture and look at that to
152
really understand the tools we’re going to be
153
using and the structure to making a really
154
professional AI video.
155
Let’s jump into screen.
156
So you have access to this, AIvideo.school,
157
AI-video-workflow, and we’ll keep adding to
158
this so it might look slightly different by
159
the time you get to this, but it’s
160
explaining what a workflow is, showing you the
161
most effective workflows, exactly what they are, and
162
then this, this section right here is full
163
of information because people don’t know what’s needed
164
for each step.
165
So we break this down, step one, all
166
the way to step 10, and just like
167
I showed on the slide just now, we
168
go from idea generation.
169
In this course, we’re going to concentrate on
170
ChatGPT, Gemini, Perplexity, Copilot, and Claude.
171
We’re going to go through each of these.
172
That’s the next section of the course.
173
Then we make a script or a structure,
174
and again, we use Squibbler, ChatGPT, Chatsonic, Gemini,
175
TextCortex, and perhaps some more.
176
This will be continually updated.
177
From there, like we showed in the slide,
178
we go to AI Audio, we make our
179
score and perhaps our voiceover, Suno, Filmora, Udio,
180
and 11Labs, 11Labs is probably the main tool
181
we’re going to be using here.
182
We’ll show you all of those.
183
We’re going to then make our mood board,
184
our design, if you like, conceptualize the style
185
like we spoke about in the last section
186
of the course.
187
We use MidJourney for that, perhaps Gemini, because
188
Gemini is a great tool to give text
189
and image results.
190
Then I’m going to make a storyboard, which
191
you’re going to base some of the first
192
images we might be turning into video later,
193
but it’s a good way to conceptualize the
194
whole story.
195
Again, MidJourney, there’s Storyboarder.ai, which is a
196
really good tool, and Photoshop for this.
197
We can use some GenFill in Photoshop to
198
make our images how we want them to
199
look exactly, but also to place these into
200
a way and a format for us to
201
visualize our video better.
202
From there, we get into the next two
203
big sections here.
204
We’re going to do some AI image generation
205
now, and there’s loads of tools we can
206
use for this, and I could add to
207
it.
208
Here are nine tools we’re going to go
209
over.
210
There’s MidJourney, Runway Image, DALL-E, Photoshop, Grok,
211
that’s with X, used to be Twitter, Stable
212
Diffusion, Adobe Firefly, Gemini, and Meta AI.
213
We’ll take a look at all of these.
214
My favorite pretty much is MidJourney, perhaps Runway
215
Image sometimes, but MidJourney.
216
You may find after using these, some are
217
free, some are paid, that there are some
218
that are better suited to you and your
219
needs, and you’ll get your own favorite.
220
From there, we’ll convert these images into video
221
generation, and again, nine whopping tools right here,
222
some way more suitable for purpose than others,
223
and you can decide after I show you
224
all of these, which one you like the
225
most.
226
Right now, it’s Runway, but soon be Sora.
227
Let’s take a look.
228
I’m going to show you Runway, Pika, Morph Studios, Luma Dream Machine, Haiper, Kaiber, InVideo, Akool, and Sora.
231
Also now, of course, there’s Veo 3, Google’s Veo 3, which
232
you can use in Flow, like I show
233
in here, or any of the other tools
234
that are using Veo 3 out there.
235
Probably my favorite tool, I think, alongside Runway.
236
I show a lot of Runway in this
237
course, but Veo 3 now for realism is really,
238
really good.
239
Their text to video is incredible.
240
Obviously, you have less consistency with text to
241
video, but their frames-to-video is also pretty good.
242
Automatic lip-synced audio, everything all in one, so we’re going to cover Veo 3 a lot too.
245
Then from there, we may then improve our
246
image resolution, even on the still images if
247
you want to do it at this phase,
248
or after the video, or both.
249
For that, I’m going to show you Topaz,
250
Filmora, and Morph Studios again for up-resing.
251
Then we’re going to edit.
252
I predominantly use Premiere Pro, but you don’t
253
have to.
254
You could use CapCut, which is free, or
255
Filmora, and you could use any of these
256
tools, but we’ll show you some AI editing
257
features that are in there, and just putting
258
it down onto a timeline to really see
259
our final project.
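Whichever editor you choose, the core of this step, putting the clips down onto a timeline in order, can also be done from the command line with ffmpeg’s concat demuxer. A minimal sketch (the clip filenames are hypothetical placeholders):

```python
# Build a clip list for ffmpeg's concat demuxer, then print the command
# that would stitch the clips into one timeline without re-encoding.
# The .mp4 filenames below are hypothetical placeholders.
def build_concat_list(clips, list_path="timeline.txt"):
    """Write the clip list in the "file <name>" format ffmpeg expects."""
    lines = [f"file '{c}'" for c in clips]
    with open(list_path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return list_path

clips = ["shot01.mp4", "shot02.mp4", "shot03.mp4"]
path = build_concat_list(clips)

# -c copy avoids re-encoding, but requires all clips to share the same
# codec and resolution; otherwise drop -c copy and let ffmpeg re-encode.
print(f"ffmpeg -f concat -safe 0 -i {path} -c copy final_video.mp4")
```

This won’t replace a proper edit with music and grading, but it is the quickest way to preview your generated shots back to back.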
260
Then we’re going to get our sound effects.
261
We’re going to use 11labs again, predominantly, although
262
I can show you actually how to just
263
use sound effects grabbed from YouTube and downloaded
264
that are free, or you could use some
265
paid tools if you wanted to, to get
266
yourself some stock sound effects, but I’ll show
267
you predominantly 11labs.
268
If you go on over onto this page,
269
you can start going through these.
270
You can even check out some of these
271
tools if you want to.
272
I don’t expect you to know all of
273
these.
274
We’ll conquer these all in the next stages
275
of the course, so the next section is
276
all about idea generation.
277
We’ll go through these one at a time.
278
Then we’ll generate scripts, and we’ll go through
279
these one at a time.
280
Audio, mood board, storyboard, images, et cetera.
281
We’ll go through all of these tools, but
282
to familiarize yourself, you can go through and
283
check this out.
284
There’s an overview of a production process, getting
285
AI video completed, going all the way from
286
start to finish.
287
You don’t have to do it this way.
288
As we go through these tools, you’re going
289
to see which ones you want to use
290
and which ones you don’t, but it’s best
291
I show you them all and show you
292
too much, and you take away the bits
293
you want, and it will also depend on
294
the project that you’re doing.
295
An advert, for example, social media post, is
296
going to be very different than if you’re
297
making a short film for festival or something
298
more elaborate, and I’ll get into that, and
299
I’ll explain the differences in the next lecture.
— Tailoring AI Video Processes to Fit Your Purpose —
1
So there are different needs for different types of workflow depending on the project you’re doing, obviously. This lecture won’t be very long; I just want to go over some differences in workflows that will depend on, and be dictated by, the project you’re making. We backwards-engineer this: from the project we’re making through to the workflow needed.
I’m going to give you some examples we can go through on this slide, and then that’s it; we can get into making and showing you these workflows, and the tools I’m using, briefly in this section, before we go into the next sections, where we get in depth into each step.
Quickly, the objective for this lecture is to understand what’s needed for the different styles of video you’re creating. Maybe you’re creating a social media video, maybe you’re creating a short film; they’ll be different depending on what it is you’re creating. By the outcome of this, you’ll know the kind of projects you’re going to make and what you want to do, and so perhaps the potential workflow you’ll need for that. Let’s jump into the slide and I can explain.
13
So workflows really are based on projects, and there’s a big difference between them; let’s go through some. Start with a social media video: this content is of course being consumed mostly on a mobile phone, so a lower quality is accepted. Someone is watching this on, perhaps, a 7-inch screen in their hand; you don’t need it at 4K, 6K, or 8K quality that has to be blown up and displayed at cinema level.
So perhaps you can get away with a lower quality, which will in turn give a quicker workflow and allow you to produce more content. There’ll be less concentration in the workflow on the up-resing part, of course, and perhaps also on the consistency and the deep edit; you don’t need to go in and really go to work on this like you would if you were presenting a short film at a festival, for example.
22
That’s the difference with a social media video: you may want a slightly quicker workflow, and in the next lecture we’ll talk about a very quick workflow, which might be the one you use. Not to mention small things like the format: you may be doing this in 9:16, a vertical format, as opposed to landscape. Now, next I want to talk about an advert. Say you’re creating an ad for yourself, your own product, or for a company; perhaps you’ve managed to get a commission from a company to produce a high-quality advert for them.
28
This could be a 15-second advert for YouTube or social media, or it could be for TV. You’re probably going to want a slightly higher quality, because you’re making this for a client, and it’s for sales. If an advert is for sales, you never want the quality to be so low that someone says “that’s not good, I’m not going to spend money on that product.”
So you’ll want higher quality: you may concentrate on the consistency, on the image, on up-resing, etc. Not to mention ethics: in an advert, you don’t want to lie, and you don’t want to fail to show the actual product or what the person is getting. You may also have less control in the workflow.
35
I showed you there’s a mood board section and storyboarding; here you may be dictated to by the ad, by the company saying “the brand colors are red, yellow, and blue, and it needs to look this certain way; I’m primarily trying to target a young Gen Z female audience, therefore it’s going to feel a certain way,” as opposed to targeting an older male demographic.
There’s going to be a very different color, a very different grade, a very different feel to the video, and the branding, mood, etc. will be dictated by the ad. So when you come to mood boarding, or getting together your type of shots, you may have less control, and you need to really concentrate on what the company is telling you.
Now, this could shorten the length of your production process: you’re being told how it’s meant to look, so there is less need for you to develop this yourself. It could actually make it quite a bit quicker, but you’ll probably want to concentrate more on the quality here.
46
Now, next I want to talk about a short film or professional-style video. If you’re making a video project that you could distribute yourself on YouTube, or that’s perhaps going to a film festival, quality here is very important. If it’s going to a film festival, it’s probably going to be displayed on a big screen, so you’re going to want at least 4K quality as a minimum, I’d expect, perhaps even more. So you’re probably going to want to spend time up-resing your content to make sure it is really good.
Make sure there’s consistency: if you have one character in one scene, it should look like the same character in the next scene, and the feel of the video should be the same throughout, so the mood board and style would very much need to be concentrated on.
55
There may also be a more in-depth post-production process: that’s when you edit this perfectly, get that color grade, that score, that sound design. Out of all of these project types, it’s probably going to be your most in-depth production process, and you’re probably going to follow closely along with the one we mentioned in the last lecture.
If I go back on screen, you can see once again, on that site I showed you, the AI video workflow page, and we have 10 steps, 1, 2, 3, all the way to 10. You’re probably going to want to follow something like this, the most in-depth workflow. You can’t go much more in-depth; you could, I guess, spend more time on each step, but this is probably the workflow you’ll want to go with if you’re making a high-quality video production.
64
Now, it’s also personal preference: you may want to avoid some steps, or some tools you don’t like, aren’t familiar with, don’t need, or don’t want to pay for. There’s no one rule for all.
You’re going to understand and learn all of these tools as you go along, and you’re going to see me use them. In fact, in the next lecture I’m going to do a very quick workflow, and you’ll see me use several of these tools. It may be that you think to yourself, “I don’t want to learn that tool, it’s not really needed; I prefer to use this tool, this free tool, this better tool,” and you’ll start developing your own personal preference for your workflow.
So with that in mind, let’s move on. Let me show you the quickest, shortest workflow you could probably do for AI video, and watch me actually produce it. I’ll whiz through it quickly, producing it tool by tool using different AI platforms.
— The Speed Workflow: Fast AI Video Creation —
1
So exciting, let’s get through these five steps
2
as I showed on the slide here.
3
First, let’s get ourselves an idea, okay?
4
I’m in ChatGPT, let’s grab ourselves an idea for a video.
6
I’m going to show you, and we’ll whiz
7
through each one of these tools so you
8
can see what it is you’d need.
9
So, we’ve already talked about our prompts, and in the next stage we actually talk about scripts in depth, but for the sake of showing you a workflow,
13
let me do this: “Create a script for a one-minute video that’s an introduction to a new superhero, like Marvel. The superhero is called Environmental Man. Only have a narrator’s voiceover, no dialogue.”
18
Okay let’s run that.
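As an aside: if you’d rather script this step than type into the chat window, the same request can go through OpenAI’s API. A sketch, assuming the `openai` Python package and an `OPENAI_API_KEY` environment variable; the model name is just an example:

```python
# Sketch: generating the same one-minute superhero script via the
# OpenAI API instead of the ChatGPT web UI.
def build_prompt(hero_name: str) -> str:
    """The same request we typed into ChatGPT, as a reusable string."""
    return (
        f"Create a script for a one minute video that's an introduction "
        f"to a new superhero, like Marvel. The superhero is called "
        f"{hero_name}. Only have a narrator's voiceover, no dialogue."
    )

def generate_script(hero_name: str) -> str:
    # Lazy import so build_prompt works even without the package installed.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # example model name; substitute your own
        messages=[{"role": "user", "content": build_prompt(hero_name)}],
    )
    return resp.choices[0].message.content
```

Scripting it like this is handy once you start generating lots of variations of the same brief.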
19
Okay: “The Rise of Environmental Man.” Scene one: dark clouds swirl over a polluted city skyline, factories spew smoke, dark and murky. Narrator: “In a world where pollution chokes the skies, the earth cries out for a hero. A new force emerges.”
24
Scene two, cut to a closeup of a
25
young scientist surrounded by environmental research, holographic screens,
26
he reaches out grabbing a glowing vial of
27
green energy, suddenly a surge of light engulfs
28
him.
29
“Once an ordinary scientist dedicated to reversing humanity’s damage, he discovered the secret to harnessing the planet’s raw energy.”
32
Quick cuts showcase his abilities, so we see him gifted with the power of controlling the elements: “He is a defender of nature, the guardian of our future.”
36
Scene four: close-up of Environmental Man’s suit, sleek design. Okay: “When the earth needs a savior, he answers the call, a champion for the planet and its creatures.”
40
And then there are a couple more shots here. So I’ve got six shots, which could be just six images needed, maybe a couple more, and I’ve got the voiceover that happens over them.
45
So we have our script, let me just
46
copy that right here, because the next step,
47
like I mentioned, I’m going to copy that,
48
is to create the audio for this.
49
I want to have the audio first, and
50
then I’m going to create my visuals and
51
my video based on that.
52
So let’s head over to 11Labs, you may
53
use another tool for this, this is the
54
one I’m going to be showing you predominantly
55
in this course. It is probably the market leader in voiceover, sound effects, voice cloning, etc.
There are other tools and I'll show you them later when we get to that stage in a few sections' time, but just to go over this quickly, let's do text to speech. Let me just paste this all in here. So I need to remove everything that's not the narrator... okay, so here are all the narrator's parts I've got in here. That's great, that's all that's in here.
Let me find myself a voice. I'm going to find more voices. Let's see: I want them to speak English; accent, let's do American; more filters, I want a male voice; age, let's do middle-aged. And let's start listening. "Middle-aged American voice, good clear narration." Okay, so Carter the Mountain King; listening to this, that's the voice I want for this. Okay, let's add that into here, go back on text to speech, choose that, Carter the Mountain King, and generate our speech. Exactly what I want, that's exactly what I need for this. Okay, let me download that. I now have my narrator's voice; that's downloaded. Perfect.
Now I want to create some images for this. If I go back to my script, I can see the first thing I want here is an establishing shot: dark clouds swirl over a polluted city. Okay, great, I can probably copy that word for word and put it into here. I've already got my settings set the way I want, landscape 16:9; I will show you everything you need in this tool in a few sections' time when we talk about images, but for now just follow along, I'm showing you the workflow. Let's populate that and see what it comes out with. Okay, nice, let's take a look at these: that one, that one, that one, that one. That's kind of dramatic, isn't it? Okay, let me use that one, although it's black and white; let me just see if I can get that into more of a color grade. Let's take a look at these. Okay, I like this one; let me download that one right there, that's that shot. Let's go back to here: "in a world where pollution chokes", then a close-up of a young scientist surrounded by environmental research and holographic screens. Yep, let's put that in there and see what it comes out with. Okay, let's take a look at what we got here. The vial, this is nice. Okay, I like this shot the most, let's download that one. Go back to here: a surge of light is going to engulf him; we'll generate that in the background. Okay, let's take a look
at this one, let’s take a look at
119
this one, let’s take a look at this
120
one, let’s take a look at this one,
121
let’s take a look at this one, let’s
122
take a look at this one, let’s take
123
a look at this one, let’s take a
124
look at this one, let’s take a look
125
at this one, let’s take a look at
126
this one, let’s take a look at this
127
one, let’s take a look at this one,
128
let’s take a look at this one, let’s
129
take a look at this one, let’s take
130
a look at this one, let’s take a
131
look at this one, let’s take a look
132
at this one, let’s take a look at
133
this one, let’s take a look
134
at this one, let’s take a look at
135
this one, let’s take a look at this
136
one, let’s take a look at this one,
137
let’s take a look at this one, let’s
138
take a look at this one, let’s take
139
a look at this one, let’s take a
140
look at this one, let’s take a look
141
take a look at this one, let’s take
142
a look at this one, let’s take a
143
look at this one, let’s take a look
144
at this one, let’s take a look at
145
this one, let’s take a look at this
146
one, let’s take a look at this one,
147
let’s take a look at this one, let’s
148
take a look at this one, let’s take
149
a look at this one, let’s take a
150
look at this one, let’s take a look
151
at this one, let’s take a look at
152
this one, let’s take a look at this
153
one, let’s take a look at this one,
154
let’s take a look at this one, let’s
155
take a look at this one, let’s take
156
a look at this one, let’s take a
157
look at this one, let’s take a look
158
at this one, let’s take a look at
159
this one, let’s take a look at this
160
one, let’s take a look at this one,
161
let’s take a look at this one, let’s
162
take a look at this one, let’s take
163
a look at
164
this one, let’s
165
take a look at this one, let’s take
166
a look at this one, let’s take a
167
look at this one, let’s take a look
168
at this one, let’s take a look at
169
this one, let’s take a look at this
170
one, let’s take a look at this one,
171
let’s take a look at this take a
172
look at this one, let’s take a look
173
at this one, let’s take a look at
174
this one, let’s
175
take a look at this one, let’s take
176
a look at this one, let’s take a
177
look at this one, let’s take a look
178
at this one, let’s take a look at
179
this one, let’s take a look let’s take
180
a look at this one, let’s take a
181
look at this one, let’s take a look
182
at this one, let’s take a look at
183
this one, let’s take a look at this
184
one, let’s take a look at this one,
185
Slow motion, no movement. Sometimes I do this and it still makes some movement, but it's not much. The man looks at the vial. Let's see our next shot, see if we need to regenerate this again. All right, for this example that is good enough, so I'm going to download that first one, download this one. Let's take a look at our next shots. Okay, this, out of all of them, was probably one of the trickiest shots for AI: let's see if the camera follows this man flying. So I'm actually going to put that shot into reverse for the sake of this tutorial. I'd probably regenerate that again, and we'll do that later when we come to make our actual project. For this, I'm going to download and use that, and you'll see me reverse it. It's often something you might see with AI videos: when you have, I don't know, maybe a car driving down the street in a drone shot, it generates it in reverse. So it's driving backwards, and in the edit we just reverse that. So they drive forward.
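If you want to flip a clip before it ever reaches the editor, the same trick can be done with ffmpeg's `reverse` and `areverse` filters (note they buffer the whole clip in memory, which is fine for short AI shots; the file names here are placeholders):

```python
def reverse_cmd(src: str, dst: str) -> list:
    """ffmpeg command that reverses both the video and audio of a clip."""
    return ["ffmpeg", "-i", src, "-vf", "reverse", "-af", "areverse", dst]

print(" ".join(reverse_cmd("car_backwards.mp4", "car_forwards.mp4")))
# Run it for real with: subprocess.run(reverse_cmd(...), check=True)
```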
Let's wait for this final shot, and then we're getting to the last stage of this speed workflow. I haven't looked at this yet; let's see what this generation did. Yeah, exactly what I need for this trailer, just a little bit of movement. Fine. And then I'd have a bright light. Okay, let's download that and move on to the next step in the process. So next you want to edit and compile these and put it all together. If you want a free bit of software, then perhaps you want to use something like CapCut. You can go ahead and download this; it looks something like this, and you can just import your videos and put them on there. I'm a fan of Premiere Pro and that's what I use, but any editing software will do, honestly; you don't need anything particularly special for this, just whatever you want to use to compile your shots together.
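If you'd rather stay on the command line entirely, joining the shots and laying the voiceover under them can be sketched with ffmpeg's concat demuxer. The file names are placeholders, and it assumes all clips share the same codec, resolution and frame rate:

```python
def build_concat(clips, listfile="shots.txt", voiceover="narrator.mp3"):
    """Write the concat list file and return an ffmpeg command that joins the
    clips, replaces their audio with the voiceover, and stops at the shorter."""
    with open(listfile, "w") as f:
        for clip in clips:
            f.write(f"file '{clip}'\n")
    return ["ffmpeg", "-f", "concat", "-safe", "0", "-i", listfile,
            "-i", voiceover, "-map", "0:v", "-map", "1:a",
            "-shortest", "trailer_draft.mp4"]

cmd = build_concat(["shot1.mp4", "shot2.mp4", "shot3.mp4"])
```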
So I'm going to use Premiere Pro. All I need to do is drag in all our downloads right here; these are the videos that we just created right there. And then I've also got, of course, the voiceover, so let me grab the voiceover that we made and drop that in here. Okey dokey, let me put that onto here, and zoom this in for you to see. Okay, great. Let me drop that shot in like that and make it the right size. I haven't upscaled any of this; that's in a later section. So it wasn't quite the right size I wanted; let me do that. I can add in a nice fade or something; let's search for a dissolve. Yeah, that will do. I might make that a little bit bigger. Okay, so now it looks something like this. Great. And now I'm going to put another dissolve in here.
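A dissolve like this can also be produced without an editor, via ffmpeg's `xfade` filter. A sketch joining two clips with a one-second crossfade; the file names and the five-second offset (when the fade starts, measured into the first clip) are placeholder choices:

```python
def xfade_cmd(a, b, offset, duration=1.0):
    """ffmpeg command that crossfades clip b in over clip a."""
    graph = f"[0:v][1:v]xfade=transition=fade:duration={duration}:offset={offset}[v]"
    return ["ffmpeg", "-i", a, "-i", b,
            "-filter_complex", graph, "-map", "[v]", "dissolved.mp4"]

print(" ".join(xfade_cmd("shot1.mp4", "shot2.mp4", offset=5.0)))
```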
Our next shot was of that young guy, the scientist. Let's find his shot and drop him in here. I probably also want another little fade at the front. Great, let's go on to some of our other shots here. Okay. "Discovered the secret to harnessing the planet's raw energy... gifted with the power to control the elements." And let me find that last shot that can go here. So, as I said with this one, what often happens is it generates it in reverse. That's fine; I'm just going to reverse this shot, which you can do with any software. Let's go to speed/duration, and I'm going to just reverse that shot right here. And now it looks something like that; you see it comes forward.
That’s fine.
290
Let’s go to here.
291
So we haven’t got the complete obviously the
292
complete timeline for the sake of the length
293
of this tutorial.
294
So let’s play.
295
Let’s play this and see what this looks
296
like.
297
In a world where pollution chokes the skies
298
and the earth cries out for a hero.
299
A new force emerges.
300
Once an ordinary scientist dedicated to reversing humanity’s
301
damage, he discovered the secret to harnessing the
302
planet’s raw energy gifted with the power to
303
control the elements.
304
He is the defender of nature, the guardian
305
of our future.
306
When the earth needs a safe.
307
OK, perfect.
308
So you can see how this is coming
309
together, how we’re making this.
310
We could do another step.
311
I’m going to show you something else.
312
So this step I didn’t put into the
313
speed, but I feel like this project we’re
314
just making needs it.
315
Let me do a superhero Marvel sound track
316
for a trailer.
317
Epic.
318
That’s probably all I need to put for
319
this.
320
I want to be instrumental.
321
Create.
322
This is Suno that can create you soundtracks.
323
You could have lyrics.
324
We’ll get into this when we do the
325
audio in a couple of sections time.
326
I’ll show you this tool.
327
It’s really one of my one of my
328
favorite.
329
Oh, that’s nice.
330
Yeah, exactly what I want.
331
OK, I’m going to download this right here
332
when it’s ready, download the audio and add
333
it onto my timeline.
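Mixing the music bed under the voiceover is also scriptable; ffmpeg's `amix` filter blends the two, with a simple volume drop standing in for proper ducking. The file names and the 0.3 music level are placeholder choices:

```python
def mix_cmd(video, music, music_vol=0.3):
    """Blend the video's existing voiceover audio with a quieter music track."""
    graph = (f"[1:a]volume={music_vol}[m];"
             f"[0:a][m]amix=inputs=2:duration=first[a]")
    return ["ffmpeg", "-i", video, "-i", music,
            "-filter_complex", graph, "-map", "0:v", "-map", "[a]",
            "-c:v", "copy", "with_music.mp4"]

cmd = mix_cmd("trailer_draft.mp4", "suno_track.mp3")
```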
Back to our project. Let me just chuck in the soundtrack we had; here's the project right here, let's drop that on. Okay, let's play this now. Nice. Let's just play that through with the voiceover. "In a world where pollution chokes the skies and the earth cries out for a hero, a new force emerges. Once an ordinary scientist dedicated to reversing humanity's damage, he discovered the secret to harnessing the planet's raw energy, gifted with the power to control the elements. He is the defender of nature, the guardian of our future. When the earth needs a sav..."
Amazing. So you can see that in just, I think, 20 minutes or so of working on this speed process right here, we've gone from... let me show you. We've gone from idea; we had no idea to start with. Then we did a voiceover for it, a narration. We generated the images that we needed, we then made the videos out of those, we even got a soundtrack for it, and then we added it all together to make ourselves a project. All I would do here, if I zoom out, is see that the narration finishes around here; that's like a 42-second trailer. I need maybe several more shots, another five or so, that I would do in exactly the same way.
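That "another five or so" estimate is just arithmetic: divide the narration length by a typical clip length and subtract the shots you already have. A quick sketch, assuming roughly four-second clips:

```python
import math

def extra_shots(narration_secs, have, avg_clip_secs=4.0):
    """How many more clips are needed to cover the narration."""
    needed = math.ceil(narration_secs / avg_clip_secs)
    return max(0, needed - have)

print(extra_shots(42, have=6))  # 42s / 4s -> 11 clips needed, 6 done -> 5 more
```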
Midjourney: create the images, tweak them. I would play with it, and I would probably add some sound effects and things. But look at that: in 20 minutes we've done a speed workflow and generated this. Within an hour, you'd have yourself the trailer that you needed. That was the speed workflow. So in the next lecture, the next stage, I'm going to go a little bit more in depth with this, and we've got a few more steps to go through. Let me show you in the very next lecture.
— Step-by-Step: The Comprehensive AI Video Process —
Now, this workflow I'm going to show you in this lecture, compared to the last one, is what we like to call the best workflow. That's because it goes through pretty much every stage of the workflow that you could want for an AI video. There are still some cases where you could skip a step, or where you might want to use more tools for a step rather than just one tool per step. I'm going to show you an in-depth workflow. We're going to jump into the screen in a moment and create an AI video from scratch, from idea generation all the way through to completion at a very high quality, using every stage in the workflow that I've mentioned. So the objective for this lesson is to see what the longest version, if you like, of the best workflow would be. The outcome is that you're going to know what the tools are for each stage, just as an overview at this point, and probably what you're going to need for your projects: which steps you need and which steps you don't. So let's jump in, and I'm going to go through every single stage. Let's go through them all, let's create an AI video together, and you can watch and follow me step by step. Let's go.
So if I just bring up the slide here to remind us, this is what we're going to follow for pretty much the whole course, and it's each stage of the workflow. We're going to start with idea generation; for that (and this could be different or more tools, I'll show you more in the following sections) there's ChatGPT and Gemini, and the same for scripting or getting a structure. Then for audio you could use 11Labs or Suno. For your mood boards and also your storyboards, that's Midjourney, Gemini and Photoshop. Also for your images, Midjourney; there are loads, and I'll show you a few more. Perhaps you want to upres, and I'm going to show you Topaz, but there are other platforms and tools, like Morph Studio and things I'll show you later in the course. Then we're going to use a host of AI tools to make our video: that's Runway, which is primarily the tool that I like to use, and there's Pika, Luma Dream Machine, Haiper and many more. Then there's upresing also after that, and then we can edit these and put them together with Premiere Pro, CapCut, whatever you want; and then some sound effects, again with 11Labs, to finish. So it's not an exhaustive list, and you can check on our site once again; if I show you, this was the workflow explained, and these are all tools that I'm going to teach inside the course for each stage and what you could be using. Use all of these, some of these, none of these; it's completely up to you. I'm just going to show you everything.
so let’s start this process let’s make an
72
AI video from scratch and the first thing
73
I want to do is generate an idea
74
for that I’m going to use chat gpt
75
and also Gemini I like to use both
76
of these tools simultaneously and there are loads
77
more you could be using here any text
78
base we’re going to in the next section
79
the very next section is idea generation I’ll
80
show you all the tools we want to
81
use for that which of course are listed
82
right here either generation we use chat gpt
83
gemini perplexity copilot and claude but for today
84
I want to do this so let’s generate
85
me some short film ideas two minute films
86
to create with AI video that is so
87
general I’ve given it no instructions whatsoever okay
88
so let me just copy both of these
89
and I’m just going to show you what
90
these generate for results both here and in
91
Gemini so chat gpt says so the time
92
capture a child in the 1980s but there’s
93
a time capture in the backyard 40 years
94
later discovered by an adult okay digital ghost
95
a young programmer rewind a character finds a
96
device the forgotten note AI dream the last
97
message a character seen the switch in a
98
near future society character discovers a hidden switch
99
okay nature’s revenge so this is giving me
100
AI themes within this because I used AI
101
in the prompt the perfect loop in the
102
blink of an eye let’s see what Gemini
103
does here here are some short forms ideas
104
sci-fi digital doppelganger a person discovers their
105
digital doppelganger okay quantum leap scientists accidentally I
106
like the idea of sci-fi for generating
107
good images the last humans on earth interacts
108
with advanced AI okay fantasy enchanted forest dream
109
weaver wishing well surveillance state glitch so I’m
110
getting loads and loads of comedy ones here
111
I like the way that Gemini divides these
112
down by kind of niche or genre Gemini
113
you’ll find is really good at this and
114
even I could take one of these and
115
run it again and it’ll give me image
116
examples and things Gemini is very good at
117
doing this and dividing it down into a
118
really easy manageable way to receive information so
119
I like the idea of this sci-fi one; I like The Last Human, something like this, perhaps not "interacts with advanced AI". So I'm going to do: "generate me a video idea, two minutes, for a sci-fi film about", what did they say, the last human... I like the idea of just a single character: "about a young girl on a futuristic planet who, at aged 14, realizes she is developing superpowers". Okay, let's run that. Let me just copy it; I'm going to put it also into ChatGPT so we can compare these side by side. So now I've given it some more of an idea. You could have gone with any of those ideas, or you could have your own one and want to generate ideas around it. So this one's giving me the title The Awakening: a distant futuristic planet, advanced technology, floating city surrounded by vast alien... plot summary, a young girl, Mira, sitting alone on a rooftop of her futuristic home, she's 14 years old, gazing at a purple sky, holographic devices, floating vehicles; she feels out of place here on the planet, ethereal, and in her citizen role Mira has always felt... okay, then the inciting incident: she discovers her superpowers. All right, let's see what Gemini came up with. So this gave me The Awakening; logline, a young girl living on a futuristic colonized planet begins to manifest extraordinary abilities, forcing her to confront a hidden destiny and the dangers that come with it. I like this: establish a futuristic alien setting, the awakening, a sudden inexplicable... okay, so this is nice, I do like both of these. So let me use ChatGPT; this is the idea that I want. Okay, on generating ideas: spend a lot of time generating the ideas you want. Go forward, come back, back and forward, back and forward, so that you get something that you want. You saw that the first prompt I gave was anything; I just gave it a command to come up with ideas. You may have something that you'd like, something you want to create (most of you will), and then start working back and forward; ChatGPT is a great one to work ideas back and forward. So I could say, um, "generate me a script". Now we're moving on to generating a script; if I bring up the slide, that's the second stage right here. I want to
"generate me a script for a one minute trailer for the film mentioned above; there is only a narrator, not dialogue from characters". I'm going to do that because I don't like using lip sync, although I can show you in here how to do lip sync; actually, maybe I'll do that: "generate me a script for a one minute trailer for the film mentioned above; there is only a narrator, not dialogue... a narrator and just one line of short dialogue from the main character". Okay, let's see what this does. Now, because ChatGPT is this kind of back-and-forward model, I can say, hey, you know that one you just did there; I don't have to re-explain the whole thing again with "create me a script based on" and then describe the movie, I can just say, hey, make me a script based on this. Amazing. So: title, The Awakening, one minute trailer script. A shot of a wide-open futuristic planet at dusk; two moons hover in the sky, casting an ethereal glow over the sprawling cityscape, floating vehicles and towering structures. Narrator: "on a distant planet of Ethereon, where everyone's secret is buried deep, one is about to surface". Quick cuts of the young girl, Mira, looking at her hands as objects around her float around the room and wobble; her eyes faintly glow with mysterious energy. Great. "She always felt different, until one moment changed everything." Cut to an aerial view of Mira standing on a rooftop, objects swirling around her; lights flicker and the city glows. Fast-paced scenes on screen, unintentionally shattering... Narrator: "a power she never asked for", then her father standing there, secrets... okay, flashbacks, and then the only line she says here, so it's the only one I need lip sync for: "what am I?", camera zooms in. Okay, let's create some of this. That's great, that's exactly what I want, so this is my script I have right here. I'm going to copy and paste that, just like we were doing in the fast workflow in the last lecture; I'm going to do exactly the same thing here for the next stage. You could obviously work that back and forward, back and forward, and I would spend time, hours, a day, whatever you want, working your script perfect. For the sake of this tutorial and showing you this, I'm going to move on and go to the next stage. Right now I want to go to
11Labs. If I show you the slide, we're going to start working on our audio for this. What I want is the narrator; I don't need sound effects for this yet. You can imagine lots of sound effects and music, etc.; we do that at the end of the video. Let me just go here and come back onto 11Labs. So this is what we had from the last lecture; I'm going to erase that, paste in my script, and then just take out everything that isn't the narrator's speech. Okay, here we are; this is all the speech that I want from the narrator. Now, we used a really great voice last time for trailers, and if I just generate that I can show you what it sounds like: "on the distant planet of Ethereon, where every secret is buried deep, one is about to surface; she always felt different, until one moment changed everything; a power she never asked for". So that's a really nice voice. Let me see if I can find... I'd quite like to find a female voice for this, because the story is about a girl; perhaps I can find something, maybe an old woman, perhaps there's a really nice-sounding voice. "We can do no great things, only..."; "whether you think you can or you think you can't..."; "to bring anything into your life..."; "and then the sun rises in the east and sets in the west"; "no garden is without its weeds". Oh, I think I found the voice I want to go for, actually; listen to Jessica's voice right here, let me show you: "Jessica... well, well, look what we have here". Oh, kind of ominous. Okay, let's generate the speech with that: "on the distant planet of Ethereon, where every secret is buried deep, one is about to surface; she always felt different". Nice, that's exactly the kind of voice I want. Let's download that and keep it for later. Now we have that; that was that stage. Let's go on and
start generating ourselves a kind of mood board for this. I want to have a certain kind of look. I'm going to go back quickly to ChatGPT, to this opening shot; it's a good way to get myself a good feel for this. I'm going to go to Midjourney (again, I'll show you this later). I'm just going to put this in here without giving any style instructions; then I'm going to go back on here and add "sci-fi, dark, film noir, cyberpunk" (we spoke about these earlier), and just "a wide shot of a futuristic planet at dusk, moons hover over the sky". Let's play with that. Okay, let's see what it generated for us here. These are quite nice; this one, and not so much that one. Okay, let's see what happens when I gave it some style guides. Oh, nice, look at this. Okay, that one, that, that... oh, I do like this one too. Okay, I'm going to use this; I like this hue. Let's download that and keep it. And then another thing I always try to get is the young girl. Okay, I'm just going to copy this, and I'm not going to actually use that, just to remind myself. So
"young girl, Mira, she is 14 years old, she is sat on the roof of a futuristic skyscraper". Let's correct these... all right, let's give it this without any instructions on what she's wearing for clothing; let's see what it generates. I haven't described the girl at all, I just said 14 years old; I kind of want to see what it comes up with. Then let's do this again, and let's do it with pink hair and an innocent look, futuristic. Generate both of these and let's see what Midjourney comes out with. Now, the reason I do these two is because I want, one, my setting and the feel of my setting, and second, my character. They are at least the two things that you need, because that's the bare minimum that you could ask for to start: I want to know what my character looks like, developed properly for my mood board, and the kind of color and feel for this movie. So you need two images here. Okay, let's have a look at these. So I like things like the composition of this, and a girl more like this. Okay, let's generate it again; I'm going to say cyberpunk, and I'm also going to take away that eye-glowing part, because it's concentrating on that, and I can add that later. Let's do it again; first let's do "cinematic shot of a...". You'll find when you're generating these images that this is what takes the most time, going back and forward to get them perfect, and I'm only generating one at a time; usually I would generate multiple at the same time, and I'll show you that in future sections. Okay, let's take a look at this. I like this one the most, I think; let's take a look at some more. Okay, I'm going to run with this for this example. Now, while I'm here, I'm just going to hit strong, which means regenerate it again but don't change it completely, and subtle; again, I'll show you this in future sections. Let's get our images again. Okay, here is the strong remix of this: this one, this one, this one, this one; this is the subtle. Okay, I quite like this one, you know, but you can't see the world that she's in; we do have the establishing shot, though, and I like these colors, I like what she's wearing, and this girl. Okay, let's download this; these are my two shots. Now, you can use whatever you want to place these. I like to use Photoshop; you could obviously use something like Canva, you could print these out, you could keep them on your desktop; you don't need to make an actual mood board if you don't want to, but I like to do this. Let's grab my images right here and put these on there, okay, and then I also like to... okay, so here's a very rough version; you'd obviously have more like eight to ten shots or something on your mood board, but here's kind of what I'm going with for this film. I've got this kind of pink hue to everything; you know, in the style section we talked about cyberpunk and that 80s kind of feel, almost, but futuristic; cyberpunk, that Mad Max kind of feel. This works with exactly the kind of style, so as long as everything I'm creating fits this mood board, then I know I'm on the right track. So we did the mood board; now I want to do
my storyboard. Okay, again, you may not have to do this, but I like to do it to make sure I've got what I need. So let's go back to the script that we came up with: The Awakening, opening shot; I've already got that from my mood board. Okay, the next one: quick cuts of a young girl looking at her hands, and objects around her. So what I want: I've got that shot of Mira already, and I'm going to have another shot of her close up, with her hands up or something, as if she's got these lights coming, and her eyes glow. So let me go back into Midjourney; I want that same girl (I'll show you how to use this all properly later). So I'm going to say: "this young girl, face-on shot, she is looking at her hands, there is a light glowing from her hands and her eyes glowing". Let's take a look at these images right here. We know from the limitations that hands are often very difficult for AI; pretty good, that's good, let's take a look. Yeah, that's a nice one too, isn't it? This one coincided more with the last image that we produced. I quite like this one, maybe this. Okay, let's download that. I'm going to show you back here... so if I start laying out my storyboard like this: I'm going to have a shot of the landscape right here, then maybe we'll fade into this as they're speaking, then I'm going to have our new shot we just created. Okay, so I'm starting to build this up here. If I just go back to ChatGPT, I can see: we have "on a distant planet of Ethereon", so we have a shot of the planet; and "a young girl, objects around her room", which I've changed to the top of a skyscraper; "she always felt different", okay; then she is standing alone on a rooftop, things swirling around. So I'm going to have two more shots here. Let me have this girl; I want to have a close-up of her eyes, so let's use that same prompt: "this young girl, extreme close-up of her eyes glowing, superpower". Okay, let's run that. And then I also want to have "this young girl, wide shot, establishing, stood on top of a skyscraper on a futuristic planet, at dusk, with objects floating around her and her arms out, cyberpunk, sci-fi, at dusk". Okay, the first image I
wanted to have yeah exactly this let’s have
432
this one right here download that one and
433
then let’s see what happens when I want
434
this final shot and then I won’t go
435
through and do the whole film here because
436
you’d be watching an hour and a half
437
long tutorial but I can show you how
438
to start making this early part of the
439
from kind of here maybe to here-ish
440
with the voiceover and some sound and some
441
of these shots and high-res quality and
442
then you just do exactly the same thing
443
to finish off this whole video so let’s
444
continue okay so I’ve generated a lot of
445
these images let’s go through some of these
446
okay like this is the nicest one but
447
it doesn’t look like let’s keep going let’s
448
keep going okay I like this image the
449
most but it has to be a girl
450
here okay so what I’ve been able to
451
do here is manipulate that image I’ll show
452
you in the future lectures to change this
453
into pink hair and then make sure she’s
454
wearing a jacket something like this this is
455
that final shot that I want so let’s
456
download that now the next phase on to
457
Now, the next phase, which you don't have to do but is preferable depending on whether you need it, is to upres the quality of these. The image quality is pretty good, but not amazing. You can do that inside here, and you can do it with some external tools. So let me go through and upres all of the images that I want to use, inside Midjourney first. What I can do is go Upscale (Subtle), yes, that's the one I want, and then go through and find all the shots that I chose, those four or five shots, and get those images better.

So these are now the upscaled versions, these five images here. Let me show you: we have this one, and then this image, this image, this image, and this image to tell our story. I'm going to download all of those and put them into my storyboard quickly, however you want to display this. Once again I go back to the script and read through it to make sure it makes sense, because having these laid out somewhere like Photoshop as a storyboard (and you'd have lots more of these) lets me check that this is telling a story: establishing shot here, girl sat there, something's glowing from her hands, her eyes glow, these objects are floating. It tells a story. It may be that these aren't all of the shots; you may want to go from this to this, back to this, or maybe an establishing shot of this and then zoom in. But it lets you tell a story, so you can start the next phase, which is to make video from this.

Now, you could upres this the way I just did with Midjourney, or you could use something like Topaz, right here. I'll show you this later, but look how good this software is. I can actually do one of our images right now in Topaz Gigapixel. Let's add one of the images we just made; let's do our establishing shot, actually, I quite like doing those. Let it upscale right here, and now if I drag these along, you can see it looks quite pixelated on one side, while the upscale smooths that out and gives each line purpose. You can see it's doing much more in there. So you could use something like that. There is a cost for this; this is the trial mode, so check whether you can get a free trial, and I'll show you the paid-for version later. Or, if it's not needed, you could just use what we did in Midjourney. It's completely up to you and your project.

So now we've got our images (you'd obviously have a lot more) and we've upscaled them. Let's go on and start making these into video.
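The course tools (Midjourney's Upscale Subtle, Topaz Gigapixel) use trained models to invent plausible detail when enlarging. The geometry of upscaling itself is simple, though, and worth seeing: below is a minimal sketch in Python of naive nearest-neighbour upscaling on a grid of pixel values. This is my own illustration, not how any of those tools work internally; the point is that where this merely repeats pixels and looks blocky, an AI upscaler fills the new pixels with sharp, plausible detail instead.

```python
def upscale_nearest(pixels, factor=2):
    """Naive nearest-neighbour upscale of a 2-D grid of pixel values.

    Each source pixel becomes a factor x factor block in the output.
    This is why naive upscales look blocky: no new detail is created,
    which is exactly what tools like Gigapixel improve on.
    """
    out = []
    for row in pixels:
        # Repeat each pixel horizontally...
        stretched = [p for p in row for _ in range(factor)]
        # ...then repeat the whole row vertically.
        out.extend([stretched[:] for _ in range(factor)])
    return out

# A 2x2 checkerboard becomes a 4x4 checkerboard of 2x2 blocks.
tiny = [[0, 255],
        [255, 0]]
big = upscale_nearest(tiny)
```

Running this on a real image would just mean applying the same per-pixel repetition to each colour channel.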
For these I use all kinds of tools, but I'm going to show you Runway, same as the last tutorial. I'm also going to use Pika, Hedra, and Haiper. Let's start with Runway. Let me grab the first shot that I want to put down on our timeline, which is our establishing shot. Actually, I'll run these side by side with another piece of software so you can see how well they each do. "This is a drone shot, camera moves in." That's all the direction I'm going to give it; let's see what it does.

Let's do the same thing in Pika right here. Actually, Pika I mainly use for lip-syncing, which I can show you later, so let's use Haiper instead. I'm going to do an image-to-video, choose the image, and give it exactly the same prompt I gave Runway, so they've got the same chance. Put that in and generate.

Let's go back to Runway and see what it's done with our shot. Oh, nice, it flew straight through the middle into one of these. That's really nice; I really like that shot. Beautiful. Let's download it; I want to use that shot for sure. Now let's have a look at what Haiper did. Okay, Haiper, what did you do here? Oh, nice, almost a very similar thing, a bit slower. Let's play it again. Nice. I'm happy with both of those generations. So what I'll do now is go through and put all the other shots we generated through these two platforms (I may only end up using this one), and then we'll see all the shots at the end.

So, all of my images have now been generated into video. I can show you; I had to do a couple of them a couple of times. This is the shot we have of the girl sat here: really nice, she looks up, we move in slightly. That's fine for this trailer, really good, I like it. But this one did not go so well: it started to make a candle. It's not a candle; her hands were glowing, obviously. So I redid that one, and let me show you that shot: now we just move into her face slightly. Nice. Then this is just a shot I wanted of her face with her eyes glowing; that one's going pink, great. And then I did two versions of this. I think I like this one better, where we zoom out and see her on top of the skyscraper above the city with all of the objects floating around her. This version zoomed in instead, but I think I like the other one better. So I'm going to download those, and then we can start putting this together on a timeline, which is exciting.
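When you generate the same shot on several platforms and retry failures (like the candle take above), it helps to keep a record of which take you accepted. This is not from the course, just one way I'd track it; the field names and shot labels below are hypothetical.

```python
# A tiny, hypothetical shot manifest: one record per storyboard shot,
# noting the prompt used and which platform's take was accepted.
shots = [
    {"shot": "establishing_planet", "prompt": "drone shot, camera moves in",
     "takes": {"runway": "ok", "haiper": "ok"}, "accepted": "runway"},
    {"shot": "girl_rooftop_wide", "prompt": "wide shot, objects floating, dusk",
     "takes": {"runway": "ok"}, "accepted": "runway"},
    {"shot": "hands_glow", "prompt": "close on hands glowing",
     "takes": {"runway": "made a candle", "runway_retry": "ok"},
     "accepted": "runway_retry"},
]

def needs_retry(manifest):
    """List shots whose accepted take was a retry, i.e. the first take failed."""
    return [s["shot"] for s in manifest if s["accepted"].endswith("_retry")]
```

On a real project with dozens of shots, a sheet or file like this saves you re-watching every generation to remember which one you picked.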
Now, the next step you could take here is of course upresing even further. I'll talk to you about that in later sections, but let's get on and show you how we start putting this together. Use any software that you want. I'm using Premiere Pro right here, like I showed you in the last lecture, but use CapCut, Filmora, DaVinci Resolve: it really doesn't matter. There's nothing special that you need here for AI video.

Let me drop things in. First I'm going to put in the narrator voiceover that we made. Drag that on, place it into here, and have a little listen: "On the distant planet of Erython..." Nice. Let me just turn that up slightly: "...where every secret is buried deep." Okay, let's begin putting in our shots, shall we? The first one is the establishing shot, so let's drag that in. I didn't upres this one, but we'll talk about that later. I'm going to add a bit of a fade over the top and lengthen it slightly, so it looks something like this: "On the distant planet of Erython, where every secret is buried deep."

Let's put the dissolve out again. Then I want my next shot, which is the girl sat on top of the skyscraper. Okay, let's play that: "On the distant planet of Erython, where every secret is buried deep, one is about to surface." Let's adjust that; I want it to fade out right there, because I want her hands to start glowing at exactly that moment. Let's do that and put in the next shot: "...is about to surface. She always felt different, until one..." Great, let me cut to her eyes: "...moment changed everything. A power she never asked..."

Okay, let's have a little look at what we have so far, without any sound yet. We haven't really created an emotion yet, but we have our visuals. Let me play it through for you: "On the distant planet of Erython, where every secret is buried deep, one is about to surface. She always felt different, until one moment changed everything. A power she never asked for, and secrets her family kept hidden. Some powers lie dormant..."
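One detail worth making explicit about the dissolves used above: when clips are joined with cross-dissolves, each clip starts before the previous one ends, so the finished timeline is shorter than the sum of the clip lengths. A quick sketch of that arithmetic (my own illustration, not anything internal to Premiere Pro):

```python
def timeline_starts(durations, dissolve=1.0):
    """Start time of each clip when consecutive clips overlap by
    `dissolve` seconds for a cross-dissolve transition."""
    starts, t = [], 0.0
    for d in durations:
        starts.append(t)
        t += d - dissolve  # the next clip begins inside this one's tail
    return starts

def timeline_length(durations, dissolve=1.0):
    """Total length: sum of clips minus the overlapped dissolve regions."""
    return sum(durations) - dissolve * (len(durations) - 1)

# Three 4-second shots joined by 1-second dissolves span 10 s, not 12 s.
```

This matters when you're timing cuts to narration beats, as in the edit above: budget for the overlap or the shots land late against the voiceover.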
Okay, so this is obviously all in one piece. You'd probably want to cut this up. If we go back to our script, "she always felt different" calls for an aerial shot and fast-paced scenes; you'd have more shots in here, but we haven't made them for the sake of this tutorial. So I'd probably cut this, move this along, and there'd be soundtrack between the cuts. But you can see how this is coming along really nicely.

What I want to do next is add some sound, because that's how you really get emotion into a video. So let's go to some of those tools. Firstly I'm going to go to Suno (there are some other ones I'll show you later; you saw this in the last tutorial). I'm going to ask for a sci-fi soundtrack for a trailer: dark, mysterious, epic, intense, instrumental only. Let's create that and see what Suno comes up with. We're also going to go back into ElevenLabs to get some sound effects for some of these shots. I want sound effects of a futuristic city with flying cars; let's see if we can generate a sound effect for that.

Let me go back to Suno and see what they've got for us. Yeah, that's quite nice. Let's see what this one does... well, I think I like that one a bit more, so I'm going to download and use it, although I'll run it again because I just like to see the different versions it comes up with. Suno does amazing versions of songs, and these are all copyright-free for you; they're yours to use.

Let's go back in while we wait for that to generate and see how those futuristic sound effects are sounding. Let me add "from a distance" and generate that. Back in Suno, let me see what they have for us. Okay, this one has a nice start... what was this one again? Okay, I'm going to go with this one; let's download it. Now let's have a look at some of these sound effects again. Well, I think there's too much emphasis on the words "futuristic flying cars", so let me just do "city noise, city noise from a distance". Let's generate those and download this one. I quite like this one with a bit of animal sound in it, so let's use that. Then the other sound effect I want is for hovering, futuristic; I'm just going to say "laser", for when she's floating objects in her hands and perhaps when the light appears from her hands. Sometimes you have to put some pretty obscure instructions in here; let's see what it comes out with. I like the first one and the third one, so let's take both of those. Amazing.
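In the mix that follows, "make that quieter" really means applying a gain in decibels, and the relationship between a dB change and the amplitude multiplier is standard audio math. A small sketch, purely for illustration:

```python
import math

def db_to_gain(db):
    """Convert a decibel change to a linear amplitude multiplier.
    -6 dB roughly halves the amplitude; 0 dB leaves it unchanged."""
    return 10 ** (db / 20)

def gain_to_db(gain):
    """Inverse: how many dB a given amplitude multiplier represents."""
    return 20 * math.log10(gain)

def duck(samples, db):
    """Apply a dB gain to raw sample values, e.g. to tuck a
    sound-effect bed or soundtrack under the narration."""
    g = db_to_gain(db)
    return [s * g for s in samples]
```

So pulling a city-noise bed down by 12 dB multiplies its amplitude by about 0.25, which is why a few dB on the editor's fader makes such an audible difference.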
Let's go back to our project. I grab the soundtrack and put it underneath here, making sure it's not too loud. Let's play it: "On the distant planet of Erython, where every secret is buried..." Really coming along there. Then let me get some of these city noises, this one with the birds that we had: "On the distant planet..." (let me make that quieter) "...of Erython." Okay, let's listen to this: "On the distant planet of Erython, where every secret is buried deep." Great. I'll put another effect on just to fade that sound out. We had our hovering sound effect right here; let me put those in and have a listen. And this one, all right, let me drag that one in here: "...is about to surface. She always felt different, until one moment changed everything." Nice. And then this is the other sound effect we had; I just want to make it slightly quieter: "The power she never asked for, and secrets her family kept hidden. Some powers lie..." All right, nice.

So let's have a little listen now that we've started getting some emotion behind it: "On the distant planet of Erython, where every secret is buried deep, one is about to surface. She always felt different, until one moment changed everything. A power she never asked for, and secrets her family kept hidden. Some powers lie dormant, until the moment they awaken."

Okay, great. So you can see how we're putting it all together now, really building an emotion. We've got high-res images (I could then go ahead and make the video itself even better quality), we've got our sound effects, which really add something, don't they, as well as a soundtrack and narration. It's coming together really, really nicely, and I'm really happy with it. In your own project you would of course go into this in far more depth, with way more shots, but you're seeing every step of the workflow I'm teaching you.

I hope that was useful and entertaining. You can now see every step in a more advanced AI workflow, and you'll know which tools you might want to start using for your own workflow. Now, I've shown you these tools in brief and you've seen me work through a workflow, but you don't yet know how to use these tools exactly, do you? And that's okay, because that's what every next step is for. In the next lecture I'm going to briefly go over what my workflow will be for this course, so you can follow along side by side; then we'll have a little task, and then we get into the next section of the course, where we start actually learning how to use these tools and you start making your own AI video.
— Course Project: Step-by-Step; the Workflow for My Course Project —
I'm just going to show you briefly the workflow that I'll be following for my own project throughout this course, which you can follow along with step by step. In each step I'll cover all the tools that I'll be using, could be using, or that you might like to use. I can bring up the slide here so you can see pretty much what I'm going to use at every step of the way: idea generation, script and structure, audio, mood boards, storyboards, images, video, upresing, editing, and effects after the edit, of course. And you can go to the site you have access to (I've shown you before, forward slash AI video workflow), because I'm going to be following this along: steps one through ten. The next sections of the course follow this same order. The next step is idea generation, where we go over these five tools. Then I'll make a script and structure with these tools; then audio with these; then mood board and storyboard with these; then image generation with all of these, a whopping nine tools; then video generation with all of these. And then I'm going to upres, edit, and add some sound effects to complete the final project. We'll come up with what the project actually is in the next stage. I encourage you to go to the site and check these out, because I'm going to set you a task in the next lecture that corresponds with this, just so you know what's coming up in the course and what you could be doing for your own workflow. So head over to the next lecture, and you'll see the task.
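The ten-step workflow described above can also be written down as data, which makes the order concrete at a glance. The step names follow the slide; representing them as a plain list is my own sketch, not something from the course materials.

```python
# The ten workflow steps from the course slide, in order.
WORKFLOW = [
    "idea generation",
    "script & structure",
    "audio",
    "mood board",
    "storyboard",
    "image generation",
    "video generation",
    "upresing",
    "editing",
    "sound effects",
]

def step_number(name):
    """1-based position of a step in the workflow."""
    return WORKFLOW.index(name) + 1
```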
— Task: Design Your Own AI Video Workflow —
So, your task for this section is a test-and-recap task. Unsurprisingly, I want you to develop your initial, very basic workflow (we'll develop it further in the next section, when we're using AI tools for this). I want you to plan out your workflow based on your end goal. Start thinking, and you probably already know this: am I going to be creating short films that go to festivals? Am I going to be creating adverts? Do I want to make social media content for YouTube or somewhere else? Work out what your end goal is and reverse-engineer it to figure out what your flow is going to look like. If I come onto the site here, AI video workflow again, you can start working it out: okay, I don't need to generate ideas, I know what they are, but I do need scripts, and I'll probably use ChatGPT or Gemini. Some audio here: I'll need a soundtrack and a voiceover. I'm not going to do a mood board; I'll just wing it. I'm not going to make a storyboard. I am going to do image generation and videos. I don't need to upres, because I'm not taking this to a festival or anything; it can be for social media. Then I'm going to edit it. In this way you can start to work out what your workflow will probably look like and which tools you'll be using. In the next stage, we're going on to this section right here. So if you decided you don't need idea generation, you can probably skip the next section (you don't have to, and you could still go through it to gain some knowledge about how to use those tools). But if you already know exactly what you want to make, you can probably skip forward and go straight to script and structure. Start working out what your workflow needs to be, so you can best utilize what's in this course.
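The reverse-engineering exercise above boils down to deleting the steps your end goal doesn't need. A tiny sketch of that, where the step names mirror the course workflow and the example choices mirror the social-media example just given (all of this is illustrative, not official course material):

```python
# The full course workflow, in order.
FULL_WORKFLOW = [
    "idea generation", "script & structure", "audio", "mood board",
    "storyboard", "image generation", "video generation",
    "upresing", "editing", "sound effects",
]

def personal_workflow(skip):
    """Your own pipeline: the full course workflow minus skipped steps."""
    return [step for step in FULL_WORKFLOW if step not in skip]

# The social-media example from this task: ideas are already known,
# no mood board or storyboard, and no upresing since it isn't going
# to festivals.
mine = personal_workflow({"idea generation", "mood board", "storyboard", "upresing"})
```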
— Generating Ideas with AI: Best Practices Unveiled —
This section, we're going to concentrate on generating ideas for AI videos using certain AI tools. Now, I mentioned this in the last section: if you already have your ideas, you may want to skip this and go on to scriptwriting, which is the next section. But I implore you to at least watch this lecture, because I'm going to talk about some best practices that are needed when generating ideas. Not all ideas will work out very well with AI, and you should put some limitations and checks in place to make sure you're generating the right ideas for what's currently available, for the ability of AI right now, if you like. So the objective for this lecture is to fully understand best practices for generating ideas: not just the tools you'll use, but the limitations and things you need to know. By the end, you'll have a better understanding, when generating ideas, of what's possible, plausible, or best when creating with AI.

So let's jump into the slide quickly. Here's the slide for today's lesson and the things I want to talk about. Before I go over to the screen, I'll mention this page I have about video ideas, which you have access to, covering how to use every single tool in depth to generate ideas, all listed there. Let me go back to the slide and go over some best practices for you.

The first point is this: create to the limitations. There are limitations to AI, and I spoke earlier in the course about things AI is currently good and not so good at. This will all change, but this is where we are at present. AI does some pretty amazing, realistic images; you can create near photorealism (actually, I'll go as far as to say actual realism), really good. And when you convert that into video, it's also normally pretty good, but not 100%. So if you were creating, say, animation like this one I have on the main page, or perhaps some stop-motion, clay-style animation, that's obviously a lot more forgiving than something like this, which still looks real (it is realism), but where you can tell it's not 100%, perhaps from the movement; it's more like a very advanced video-game kind of feel. Don't be put off from doing realism, though. We are there, you can do it; you just have to be careful with how you utilize it.

Now, you should probably create in a more forgiving setting more often than not. Not always, and it's completely up to you, but for example, a fantasy or sci-fi setting, which isn't a real-life place, is far more forgiving. If I were to use somebody's living room, you'd probably be able to tell quite quickly whether it's real or not from small nuances and details in the decor and what's around the room. Not always, but more often than not. Whereas if I give you a big fantasy or sci-fi setting, there's no marker with which to guide us on what is real and what is not. So fantasy or sci-fi settings are very, very forgiving, as is an outdoor setting like the Wild West, which is quite generic; that's also going to be far more forgiving.

Next, you can avoid areas that AI is less good at right now, areas that might take a little time to get better. For example, lip-synced dialogue. I can and will show you some different tools, like Pika and some others we've been using, where you can lip-sync, and it's fun to do. But you will definitely, 100%, see that it's lip-synced. Realistic lip-sync is still in its infancy, and it will progress very quickly, but right now I'd limit the lip-sync on your AI videos. Also, what I'll call extreme or specific movement. You've already seen me, in Runway and other AI video platforms, tell the camera to zoom in, track left or right, et cetera, and slow motion is something I'm quite a fan of. But if I'm telling a character to move in a certain way, it's slightly more difficult. And if I need a very specific movement (I need a character to pick up and answer a phone, I need a character to do something specific), right now it's going to be quite difficult.
We can get around this, and I'll show you exactly how when we're creating videos; we just have to be very clever about the way we do it. But please try to avoid movement that needs to be specific or extreme, and too much dialogue. This obviously links to the lip-sync point, but voiceovers are great, and some are extremely realistic. None of them are exactly the same as spoken word from humans yet, but they're pretty close, way up in the 98th percentile or something, very, very good. Too much dialogue is the problem: when there are two characters going back and forth, you're probably going to become more aware that they're generated rather than real, just from the reactions of one character to the other. You could rework and rework that and get it to as fine a point as possible, but still, it's best practice to avoid too much dialogue where possible.

Now, some other best practices. Do create what you know about. If you're a massive fan of sci-fi, then great, create some sci-fi. If you've never watched a sci-fi movie before, it's not always best practice to create what you don't know about just for the sake of it. I'm a big fan of Shane Meadows, a self-taught director from the UK. If I go to the screen, I can show you here: there's a great interview with BAFTA, the British Academy of Film and Television Arts, and you can see in many of his interviews how he creates very simple films. I'll put some on screen; among his most famous are the film This Is England and the series of the same name. He creates very simple scenes in basic settings, about everyday events, because that's what he knows. He talks about the places he grew up and the things he experienced. He doesn't try to make a movie set in Hollywood, or Monaco, or somewhere very elaborate; he doesn't know that world. So, just between us, when creating videos I always say it's best to create what you know about. That doesn't mean you've necessarily lived it (none of us have lived in a sci-fi fantasy setting), but you should have an interest in it at least.

And the last point, which I think is definitely important in these early stages of AI, is something I call story versus gimmick. The novelty of an AI video is still just that, a novelty, and it can sometimes be slightly gimmicky. We can create videos of celebrities, of Donald Trump saying or doing something funny that isn't real, and that's the gimmick behind it. There's nothing wrong with that; it has a place, and they're fun and very impressive. But that's gimmick-style AI, which tags onto trends, people, celebrities, et cetera; the gimmick of the fact that it's AI is what stands out. That's quickly going to change, I think, as we become more used to AI, the novelty wears off, and it gets better and better. We're going to concentrate, once again, on story. Story has to be the most important thing. So we're switching from AI as a gimmick ("wow, this is AI-generated, this is really good") to this is just a film, a story, that happens to be made with AI. How good is the story?

Now, let me come back to the screen and show you this page that I'm going to give you access to. It's AI video dot school, AI video ideas, a page all about AI video ideas, the future of brainstorming. You may have an idea, or you may have none; either way, you can use AI to elaborate on and generate ideas around what you've already been thinking about and what you know. If we go back to our slide: still create something that you know about or have a passion or interest in, but you can definitely work one-on-one with AI to generate and develop those ideas further.

I'm going to show you all the different tools here. If I scroll down, you can see an overview of the five tools and their general differences. ChatGPT: great, highly versatile at generating creative structures and ideas from scratch, with a heavy focus on narrative and language. You can also have a back-and-forth conversation: develop this point, what about that, building on your previous conversations with ChatGPT. I love that. Gemini, by Google, excels at providing factual accuracy and deeper context, making it suitable for data-heavy or educational video content. If you're doing, say, a documentary-style video, something with a lot of information and context, Gemini is a great source for generating ideas and then elaborating on them, which we talk about in the next stage when you come to script and structure your video. Perplexity is more focused on delivering concise responses and is ideal for finding quick ideas or verifying trends in specific niches. Some of you will want to use trending topics, and that's fine, creating for what's trending to get as many eyes and as much traffic on your content as possible; Perplexity is really good at that. Claude, much like ChatGPT, leans towards a conversational style, making it good for brainstorming sessions and generating ideas, going back and forth, a bit like ChatGPT but perhaps even more so to an extent. You can go back and forth as if you're working one-on-one with someone who has all the knowledge of the internet. And Copilot, from Microsoft, is integrated with various applications and excels at providing relevant suggestions based on your inputs, which is perfect for refining existing ideas. If you input a fact (maybe you're making a documentary or educational video), Copilot is great at verifying and elaborating on it and working through it together.

So I'm going to break this down. If you scroll down here, you can see the five tools right here, and if I click the arrow for each one, it gives a brief overview, of what ChatGPT is in this example, and ways to use it: finding trends, exploring popular niche-specific topics, generating ideas around the news, discovering what's popular, exploring ideas for how-to videos, developing niche topics, et cetera. And for each of these, I've given you an ideal prompt that you can go and test in ChatGPT, or obviously tweak and change slightly (even using AI to tweak it) based around the ideas that you have.
I've done this for every single one here. Going further down this section, let me stick with ChatGPT: if you want to develop a specific idea further, here's an ideal prompt. For example: "I want to make a video about the future of electric vehicles. Can you outline a script or key points I should cover, including an engaging intro?" That's more like the scripting section we're going to do next, but you can see from this example that you can elaborate further on an idea. You could say: I want to make a video about the future of electric vehicles; can you tell me what the future looks like? Structure this into ten points. Give me some ideas about the best way to structure a documentary about this, or a key point I could make a whole video about specifically, rather than just electric vehicles in general.

Then there are the ideal prompt structures for ChatGPT: clearly define your purpose, provide a specific niche or topic, request the format you want the output in, and add any additional details. That's the ideal format when prompting ChatGPT, plus notes, if you'd like to know, on how this differs slightly from the other platforms listed below. I've done this for every single one. For Claude specifically: finding trends, and here's the ideal prompt structure for Claude, which opens with a goal or topic, slightly different from ChatGPT. Perplexity works in much the same way, but there it's better to be clear with direct questions, mention the specific topic, and focus on popularity or trends, as we mentioned before. Gemini is great for information and verifying; you could, for example, start with a clear objective ("I want to verify if this is true") and then give it the fact you want checked, and there are example prompts here that you could use for Gemini. Lastly, Copilot: for ideal prompting you need to input your context or your document. For example, sometimes after I've generated a script structure with another AI platform, like ChatGPT, I've then used Copilot, input that document, that script, and asked it to verify and fact-check it to the best of its ability, because that's very important when you're making factual or informational videos.

So that was an overview of all the different tools. Please go over and check out the website, start playing with these to generate ideas, and in every lecture refer back to it to see the best practices for prompting. In the following lectures, I'm next going to show you some amazing AI video channels and where to find inspiration; the best thing you can do is look at what current AI creators are making. Then we'll move on to using each one of these tools specifically to generate ideas, and we will, for this course, generate ideas for a project I'll be working on throughout, which you can follow along with, either with the exact same project if you want to, or with your own project, step by step. So I'll see you over in the next lecture. Let's have a look at some examples of existing AI creators and the exciting things they're doing, so you can get some inspiration for yourself. I'll see you over there.
— Where to Find AI Videos for Creative Inspiration —
So, in this lecture let's take a look at some existing places where you can find AI videos for inspiration. It's shocking how many of us don't research what's already out there to understand how something works and what we could be doing. All filmmakers have studied other filmmakers, whether at film school or in the industry, and it's exactly the same with AI video. To think you could make a video without watching other existing AI videos is a funny concept.
Please, none of the students on this course should do that: familiarize yourself with some AI video. So, I'm going to show you some of the best ways to find it. Now, YouTube is the best place to find videos on anything, but finding the good stuff is hard: hundreds of hours of content are uploaded to YouTube every single minute, and most of it gets lost forever, with no one ever seeing it. So let me suggest the best way to find it. This is also why I'm not going to point you at specific channels; by the time you watch this course, a channel or its content may be gone. If you search "AI film" in the tags, it comes up here. Let me show you everything tagged #AIfilm. This is probably my favorite tag for finding good AI film content; "AI video" often surfaces AI video generators, and just "AI" is too broad, so use "AI film". When I click that, something like this comes up, and I recognize so many of these channels; I follow them and watch their stuff, it's really fun. There's lots here, and I'll show you another one: trailers. For example, if I hover over this, someone's made a Scooby-Doo trailer right here. "The AI film that made Hollywood cry". Wow, there are some really good films. I love this one; it's quite common to do this 1950s-style stuff, for example Star Wars as a 1950s movie. Because this is YouTube, people are trying to get as many eyes as possible, so if you trend-jack something like Star Wars, it's going to do better on the platform. But this is a good way to see what people are able to do when they're making AI film. In fact, let me play some of this. From these visuals, and the small amount we've covered so far, back in the last section where I showed you workflows, you can probably understand how these were generated and how you could generate something as good as this. I can also see another tag being used here, "AI cinema", so let's search that as well. But "AI film" is one of the best tags, I think, to scroll through and see what's happening in the AI space. You'll see a lot of trailers on here, but they're great because people are doing great shots that don't require a lot of dialogue. A great way to get inspiration. Let's go and find the "AI cinema" tag. Here you'll find a lot more short films: a psychological thriller, AI-made short films. Okay, great. Some of these you would never find just by searching for an AI film, because one has 63 views from a year ago, but it might be really good; it just gets lost in everything uploaded to YouTube. Some of these are really fun to watch. There's lots of sci-fi which, as we spoke about in the last section, is way more forgiving, but not all of it. Another fantasy setting. Yeah, lots of sci-fi and horror. These are really good. Oh, someone's made The Great Gatsby. But was that with cats? Was that The Great Gatsby? Great. Yeah. Okay.
Lots of funny stuff. Really, really good. All right, let me show you some other tags. "AI film" was the main one. If you search just "AI", you'll mostly get news about AI; there's some comedy you can scroll through and find, but most of it is talk about AI and AI news. Another tag I like is "AI video". Again, you sometimes have to scroll through it, but there's a lot of what I mentioned in the last lecture, the gimmick side of AI video: a lot of people doing news spoofs. Look, you've got Donald Trump and Elon Musk having a dance-off there, and Will Smith eating spaghetti.
Lots of funny stuff there. If you're into comedy, "AI video" is perhaps the best tag for finding those kinds of videos. Another tag is "AI trailer". AI trailers, or concept trailers as they're called, are really popular. People are making trailers for movies that don't exist, or movies about to come out, like the new 007 film that hasn't been released. Or here's a 1950s version of The Terminator; Avatar 3, which doesn't exist as a movie yet; RoboCop; Predator. And here's a Musk versus Zuckerberg sci-fi movie trailer. Some of these have low view counts, under a thousand, and some have almost a million views, and using these tags is the best way to find them. Now, a good reason to be watching AI trailers is the limitations we spoke about in the last lecture, where we said you perhaps don't want a lot of dialogue, and you want scenes without very specific movement. Trailers really turn all those limitations to their advantage. Now, please also search for AI movie news; for example, PetaPixel here has an article about "the world's first fully AI-generated movie".
I'm not sure that's true, and I don't take it as fact, but do look for AI movie news, because there are going to be more full-length generated movies. In fact, let me have a little look at this. "So Claire, what's your stop?" "The City of Lights. I'm hoping it's a new start." "I don't think we have to pray for long." So that's an AI-generated movie. I think it said it was made by TCL, Technology Group Corporation, a Chinese-owned company that sells consumer electronics. Being aware of the movies that are being generated is a great way to see what's being done in the space, which brings me nicely on to this: if you search "AI Film Festival", this one is actually run by Runway, which is the tool I'll be showing you predominantly in this course. Submissions are open now, so the movie we are making in this course I'm going to submit to the AIFF in 2025; wish me luck. We'll be making it together. And if you scroll through, you can see the submissions from the last two years, and you'll notice not all of these movies are generated entirely with AI.
Some have AI parts, but some definitely are AI only, which is what I'm most interested in. For example, "Where Do Grandmas Go When They Get Lost" is a funny concept, and looking at some of this imagery, which we've been talking about over the last couple of lectures, you could generate this yourself, probably with Runway, once I've taught you it in a couple of lectures' time. You can see from the last section exactly how it's constructed; it's not terribly difficult, and this one was submitted to an AI Film Festival. So that's what I wanted to show you in this lecture. Please go ahead and check out all these resources, YouTube and the tags. And depending on what you want to make: if you want to create AI trailers for YouTube, great, go and watch loads of those; if you want to create short films, then check out the Film Festival submissions or the AI news. That's where to find the best resources.
So, a little bit of homework for you. For the next week, or however long, go and consume loads of AI videos: put them in a playlist, watch them and see what people are doing. Now that we've covered the workflow in the last section, see if you can work out how they were made in terms of workflow. Do you think they generated the images first, then ran the videos? Where did the music come from? And so on. Please go and check those out. Okay, I'll see you in the next lecture, where we're going to start actually generating some ideas for ourselves. And at the end, I'll generate the idea I'm going to use in this very course to submit to the festival. So I'll see you in the next lecture.
— ChatGPT: Your Guide to AI Video Brainstorming —
So, welcome to the first of the five tools we're going to be talking about for idea generation: ChatGPT from OpenAI. If you don't have an account or haven't used it before, just Google "ChatGPT" or go to chatgpt.com and you'll find it. It's free: create an account or log in with your Gmail, and you'll be presented with something like this, where you can message ChatGPT directly. You'll have seen me use it in the last section when we talked about workflows, if you followed along there. So, this is ChatGPT. Let me bring up the slide and explain what we're going to do. We're going to be prompting for ideas, and there are certain ways, in certain tools, we should do this to get the best results. To get the best results with ChatGPT: be clear and specific with your prompt by including key details like themes, settings or emotions; very important. Provide context or background information; talk to it as if you were a director, since in our case we're creating ideas for a film, so give as much background as you can: "I am trying to create film ideas; I want to create something like this or that". That always helps with ChatGPT. Break down complex questions into smaller steps, even if that's just with punctuation; you can use numbers with ChatGPT if you want multiple things from it. And then add your constraints.
If you don't want or don't like something, you can tell ChatGPT that too. There are more details on the site; once again, this is the AI video ideas page that you have access to. If you go down to ChatGPT in the drop-down menu, you can see all the information, example prompts and what's needed for generating a specific idea or developing one further with ChatGPT. So, let's do that. I promised last time that we're going to run the same prompt in all five of these tools to compare them side by side and see what kind of results you get. After these five, you can pretty much choose your favorite one to use for generating ideas in the future. So let me grab the prompt we spoke about in the last lecture, and we'll add the pieces we need. This was the one: "Create a set of 10 short film ideas/concepts that explore the theme of", and then we insert our theme. If I didn't have a theme I wanted to use, I could copy this again, remove that part, and say: "Generate five ideas for themes for a short film". Let's see; it brings us five themes right now. Here they are: time and regret (that's quite a nice one), the power of silence, illusion versus reality, unexpected connections, and human versus technology.
Quite nice. So, those are ideas for themes if you didn't have one already; most of you probably do have an idea. But I can insert one right here, "around the theme of", and let's do human versus technology. Nice. "Set the story in", and once again, in exactly the same way, if I don't know what setting I want, I'll say: "Generate five ideas for settings for a film around the theme of human versus technology". Okay, here are five settings: a futuristic smart city (nice), an isolated underground research facility, a dystopian farm controlled by drones, an augmented-reality city district, and a ruined Earth monitored by autonomous caretaker robots. All right. I quite like the futuristic smart city, so let me put my prompt back in. Sorry, I realized I typed it wrong there; ignore that. So, "a futuristic smart city" goes in. Great.
"And focus on a protagonist who is a", and then describe a protagonist. Once again I'll copy this; I don't have any ideas, but perhaps you do: "Generate five ideas for a protagonist for the film idea above, humans versus technology". Typos don't really matter, by the way; it can understand. But you can see how with ChatGPT I'm using conversation: we already spoke about something earlier, and I'm essentially saying, "hey, remember that thing you spoke about? Give me some of that". I did have to tell it which idea, humans versus technology. So, here are five protagonist ideas: the disillusioned engineer (kind of a Back to the Future-style Doc), the rebellious teenager (quite like that), the tech-dependent survivor, the conscientious scientist, and lastly the rogue caretaker robot, told from the robot's perspective. I like the rebellious teenager. Of course, I'm only generating five ideas here; you can generate as many as you want. "A protagonist who is a rebellious teenager." Okay. "The film idea should evoke a", and let's talk about a mood here: "a bittersweet yet hopeful" tone. I actually quite like that as it is, but in exactly the same way you could say "generate five emotions" or "tell me five feelings the film could evoke". So: "The film idea should evoke a bittersweet yet hopeful tone. Include a plot twist in the idea." Nice. This is what I want.
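The template we've just filled in, with its slots for theme, setting, protagonist and tone, can also be assembled programmatically if you prefer to keep your ingredients in one place before pasting the result into the chat box. A minimal sketch; the helper name `build_idea_prompt` is my own, not something from the course or from ChatGPT:

```python
# Minimal sketch: assemble the lecture's idea-generation prompt from its parts.
# build_idea_prompt is a hypothetical helper name, not from the course.

def build_idea_prompt(theme, setting, protagonist, tone, n_ideas=10):
    """Fill the prompt template with your chosen ingredients."""
    return (
        f"Create a set of {n_ideas} short film ideas/concepts that explore "
        f"the theme of {theme}. "
        f"Set the story in {setting}. "
        f"Focus on a protagonist who is {protagonist}. "
        f"The film idea should evoke a {tone} tone. "
        f"Include a plot twist in each idea."
    )

# The exact ingredients chosen in this lecture:
prompt = build_idea_prompt(
    theme="human versus technology",
    setting="a futuristic smart city",
    protagonist="a rebellious teenager",
    tone="bittersweet yet hopeful",
)
print(prompt)
```

Swapping one argument (say, a different protagonist) regenerates the whole prompt consistently, which makes the side-by-side tool comparison later in this section easier to repeat.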
Once again, as I've just said, one of the best practices with ChatGPT is to break things down, so I've separated these with punctuation, full stops (periods), to split it all out. You could also structure it one, two, three, four, five; it's not necessarily crucial, but let's run this and see what it says. Okay, it's generating the 10 ideas; let me go through them. "The Whispering Streets": in a city where every street light and corner is equipped with surveillance AI, a rebellious teenager named Kai discovers an old radio frequency that bypasses the city's monitoring systems and starts broadcasting messages of freedom to the others. Plot twist: Kai realizes the old frequency wasn't a loophole but a hidden backdoor left by an AI designer. Okay. "The Last Rebellion": in a world where emotions are regulated by wearable devices, a teenager, Aria, discovers how to override her device. Plot twist: caught by the system, she faces a choice between permanent suppression of her emotions and the sacrifice of her individuality.
Okay. I can go through some of these; I won't bore you by reading them all, and I'll come back with the concept I like the most. So, I actually quite like number five. I'm also thinking ahead to the first lecture in this section, where I spoke about limitations: when you start creating an AI video, you should create with those limitations in mind. Some of these are quite complex and would be quite hard to make; these are only the ideas, and from here you'll develop the idea further into just a short one- or two-minute video, maybe with minimal scenes. But I quite like this one: the city's transport system automatically routes citizens based on their predetermined schedules, so everyone is scheduled by technology, by AI. A teenager, Ray, figures out how to break the routine by stepping off at an unscheduled stop, a mysterious decommissioned station. Plot twist: she meets a group of older rebels who have been hidden and abandoned for years, waiting for someone like Ray to find them and reignite their hope. They reveal the station was a hub for resisting the city's algorithm long ago. Okay, I like this as a concept. Granted, this is obviously way too early in the process to be thinking about a script, but this is my idea right now, so I'll put it in here. I quite like this one, "The Blue Line". So I'm going to tell ChatGPT: "Develop this film idea further into a very simple story, using this idea, where it could be produced as just a two-minute piece with minimal scene changes." This is the next step I like to take, because I'm already thinking about my outcome: I'm not making a feature-length movie, I'm making a short video, and I'm making it with AI, so I don't want 100 shots in one minute. This isn't quite the script-and-structure stage yet; I just need the idea, and I need the AI to fully understand this part. So it's given me a two-minute piece: sci-fi dystopian genre, setting a futuristic smart city, synopsis a city where every citizen's movement is precisely controlled. The scene breakdown is here: a wide shot of a sleek futuristic train gliding in past skyscrapers; Ray sitting in the train, staring out the window, looking tense and restless, fingers tapping. And then, "always the same, always the same": that's the voiceover of Ray whispering to herself.
So, we're setting the scene that everyone is monitored by AI; you could probably have some announcements over the tannoy, or a voiceover, to establish that. Then a close-up on Ray's face as she makes the decision: she gets up from her seat, walks to the exit door and gets off. Then she's in the decommissioned station; she hears the voices of the rebels, and the rebels explain what's been happening, with a voiceover saying the station was a sanctuary. Okay, and then the last scene is here. So that's nice; that's my idea coming together. From here it's almost giving me a script, though not really, and it's not final. In the next stage I would definitely develop this idea further, and you could use ChatGPT for that or any of the other tools; some of the ones I'll show you for script writing are specifically good at taking these ideas and turning them into a story for film. But that's how I develop ideas and then elaborate slightly on the one I like.
But that was a very quick, 15-minute version of the process. You'd probably be generating ideas and ideas and ideas, lots of them, until you found the one you wanted. And if these prompts don't work, remember, all prompts do something: I could prompt anything, even just three words like "the fat dog", and it would give me something. While there is a suggested way to structure a prompt for ChatGPT, or any of the tools, you might find a slightly better way for getting ideas; I like this back-and-forth with ChatGPT. So, go ahead and start generating some of your own ideas with this. Then I'm going to move on to the next tool, Gemini, which is great because it has imagery. If you're quite a visual person like myself, you may like Gemini: it's factual and good for generating ideas; not so much for script writing, which I don't like to use it for, but for idea generation with imagery alongside, and some fact-checking too if you need that. So, let's go on and talk about Gemini.
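Since we're running the identical prompt through several tools to compare them side by side, it can help to think of that comparison as one prompt fanned out to interchangeable back ends. A minimal sketch of that idea; `compare_tools` is my own illustrative helper, and the lambda "senders" are placeholders standing in for the real chat tools (no actual API is called here):

```python
# Minimal sketch of the "same prompt, many tools" comparison from this
# section. Each tool is represented by a callable you supply; the lambdas
# below are stand-ins, not real API calls.

def compare_tools(prompt, senders):
    """Send one prompt to every tool and collect the replies by tool name."""
    return {name: send(prompt) for name, send in senders.items()}

# Placeholder senders standing in for the tools compared in the course.
senders = {
    "ChatGPT": lambda p: f"[ChatGPT reply to: {p[:30]}...]",
    "Gemini":  lambda p: f"[Gemini reply to: {p[:30]}...]",
    "Claude":  lambda p: f"[Claude reply to: {p[:30]}...]",
}

results = compare_tools("Create a set of 10 short film ideas...", senders)
for tool, reply in results.items():
    print(f"{tool}: {reply}")
```

The point of the structure is that the prompt stays fixed while the tool varies, which is exactly what makes the side-by-side judgment of "which tool do I prefer for ideas" fair.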
— Gemini: Crafting AI Video Concepts with Precision —
On to Gemini now for generating AI video ideas. Gemini is one of my favorite tools, and you're going to see why shortly: the multimodal capacity it has for generating imagery alongside text-based results, like the ones we saw with ChatGPT in the last lecture. So let me bring up the Gemini slide. Prompting for ideas: to get the best results from Gemini, start, just like with all AI models really, by being very clear and specific with your prompts, focusing on key details like themes, settings or emotions, as well as details about characters and any specifics you have. And do take advantage of the multimodal capabilities I just mentioned by incorporating images and visual references to inspire creativity, both giving them to Gemini and receiving them. Here we're getting ideas, though you could also be inspired by imagery, for example saying "create a story, or ideas for stories, around this image"; but that's not what we're going to do here. Much like we saw in the last lecture with ChatGPT, break things down into smaller steps; punctuation is the best way to do this, I always feel, with full stops, or number them out. And there are constraints with these models, obviously, both ChatGPT and Gemini, on what you can and can't do: you can't try to generate anything untoward or illegal, so be aware of that. There are more details on the site once again. If you scroll down this AI video ideas page, putting away ChatGPT from the last lecture, there's Claude, Perplexity (we're going over those next) and Gemini, and you can look at the specifics while you're doing this: ideal prompt structures, example prompts and the differences. We're prompting Gemini now. Gemini is owned by Google, and I think there's some merit in that: if I'm generating ideas for YouTube, which is also owned by Google, you'd like to think Gemini has greater knowledge inside the Google infrastructure itself. If you're trying to find trends or what's working on YouTube, I would definitely use Gemini for ideas or information, because you'd assume it has access to more YouTube data, being part of Google. So let's prompt together, shall we, just like we did
in the last lecture. So I'm going to take that initial prompt we had and put it in, and we can also adjust it specifically for Gemini. This is the prompt from the sheet we had before, but let's actually use the exact prompt we built with ChatGPT, the one with the theme of human versus technology. If you didn't see the ChatGPT lesson, go back and watch it; I explain there how I arrived at these. But first, because I like to use this platform to get some visuals, let's do the first part: "Create a set of 10 short film ideas/concepts that explore the theme of [insert theme]". I want to generate themes, so: "Generate five theme ideas for a short film". Let's start with that. Time loop, and it's giving me an image here (not a great image, obviously). Artificial intelligence; okay, show me the image. Post-apocalyptic survival, with this cyberpunk-style image of a lone figure walking through a ruined city. Alien encounter, a small town visited; that actually looks quite nice, I like these visuals. Dystopian surveillance. So I like to use Gemini because, even if these example images aren't always great, I quite like getting imagery around the ideas. Let's take this one, artificial intelligence, and say: "Give me five short film ideas based on this", pasting it back in. It gives me the five ideas: "The Awakening", a young AI discovers the world outside its computer (well, okay); "Digital Dream", an AI artist creates stunningly visual digital paintings; "The Lonely Algorithm", a social media algorithm designed to connect people becomes aware of its own loneliness; "The Ghost in the Time Machine"; and "The Last Question", an ancient AI tasked with finding the answer to life's ultimate question. Now let me go back to the prompt we had and put in for ChatGPT.
If I pop in that same prompt right here: "Create a set of 10 short film ideas/concepts that explore the theme of human versus technology. Set the story in a futuristic smart city. Focus on a protagonist who is a rebellious teenager. The film ideas should evoke a bittersweet yet hopeful tone. Include a plot twist in the idea." Let's see what Gemini gives us. Okay: "The Glitch in the System", a rebellious teen discovers a glitch in the city's AI. "The Digital Divide", a teen from a low-income neighborhood, excluded from the city's advanced tech, uncovers a secret (okay, I like that). "The Ghost in the Machine", a teen befriends an AI. "Last Stand of Humanity". Okay, I think I like the second one, "The Digital Divide": a teen from a low-income neighborhood, excluded from the city's advanced tech, uncovers a secret underground network of hackers fighting for digital equality. All right, let me take that, and just like I did with ChatGPT, ask it to "develop this further into a very simple story, using this idea, where it could be produced as just a two-minute piece with minimal scene changes". "The Digital Divide: a two-minute short story." Scene: a dimly lit, cluttered room. Jake, a 15-year-old, sits at a rickety desk, struggling to load a web page; frustrated, he slams his fist on the desk. Jake, muttering: "This thing's slower than a snail." Suddenly a notification pops up, a mysterious message, "Unknown joins the chat room": "We could change the world." "We already are." So this is just a single scene, not a full story. ChatGPT gave me a better story with this exact prompt; it had a rounded feel, with a beginning, middle and end. What Gemini has given me is one scene, if you like: the opening of something where he meets someone online and they start talking. But that's what it's given me. What I do like about Gemini is the
66
start talking about this but that’s what it’s given me so what I do like about Gemini is the
67
fact that it gives me an image when I’m asking for ideas for things you can develop this further
68
if you were for example it’s great at doing if I go back onto here if I go back onto site here
69
the ideal prompt structure start with a clear objective I’m planning a video on the benefits
70
of digital detox and can you provide clear statistics let me show you this with Gemini
71
it’s very good at pulling facts from places well we connected to google isn’t it the world’s largest
72
search engine so it is really good at that and giving me ideas for things and it’s good at
73
punching out this factual so if I was to compare this previously to chat gpt which we used before
74
it’s great for factual and getting perhaps I’m going to use it in the next section for structures
75
quite a lot also if you’re a visual person you get some visual imagery on here it’s not my favorite
76
for going back and forth I still think the leader in this space is chat gpt for generating ideas and
77
having a back and forth conversation Gemini perhaps for fact checking I quite like the idea
78
of that or getting some visuals initially that’s fine so my personal preference so far for these
79
two perhaps yours isn’t but is chat gpt but we can go on to the next lecture where we’re going
80
to look at the AI tool Claude and you can compare them to these two for generating ideas
— Claude: AI-Powered Video Idea Development —
The next tool I want to show you for generating ideas is Claude. I don't use Claude all that much, because, as you're going to see, it's very similar to ChatGPT, but you might prefer it, so I have to show you. To sign up, simply sign in with your email; I used my Gmail and then had to verify with my phone number via a text message. You have to be over 18, I think it was, or 16, to use it. To access it, go to claude.ai, run through those setup stages and you'll have access right here. It looks very similar, doesn't it, to ChatGPT and all the other models we've been using. So without further ado, let's get into this and let me show you the results we get. If I go back to where we were previously, I can just copy the prompts, so it's exactly the same prompt as in the previous tools. If you haven't seen the ChatGPT lecture especially, go and watch it; you'll see how we developed this prompt. If I bring up the Claude slide quickly: there aren't too many differences from ChatGPT, apart from the usual. Make sure you prompt effectively and give clear, specific instructions. Structure your prompt in a conversational style, probably even more so than with ChatGPT; it's a back-and-forth conversation with Claude, which is nice; those are my favorite kinds of models. Also, you can ask for improvements or multiple perspectives; Claude is very good at giving you material like that. And if you use open-ended questions or add constraints with Claude, you'll see what the results come out like. They're great; it will give you a whole wealth of balanced responses.
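This back-and-forth style, where a follow-up like "explain" or "improve" refers to an answer already on the table, works because the whole conversation travels with each new turn. A minimal sketch of that pattern using the common role/content message convention; `add_turn` is my own illustrative helper, and no real Claude API is called here:

```python
# Minimal sketch of conversational refinement: keep a running message
# history so follow-ups apply to the idea already discussed. The
# role/content dict shape is the common chat-message convention;
# add_turn is a hypothetical helper, not a real API.

def add_turn(history, role, content):
    """Append one turn to the conversation and return the history."""
    history.append({"role": role, "content": content})
    return history

history = []
add_turn(history, "user", "Create a set of 10 short film ideas...")
add_turn(history, "assistant", "1. The Last Library ... 6. Signal Runners ...")
# The follow-up refers back to the earlier answer, as in the lecture:
add_turn(history, "user", "Explain the 'Signal Runners' idea in more detail.")

for turn in history:
    print(turn["role"], "->", turn["content"][:40])
```

Because the earlier idea list is still in the history, a bare instruction like "explain" or "improve" is unambiguous; start a fresh conversation and that context is gone.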
So let's use Claude for this: "Create a set of 10 short film ideas/concepts that explore the theme of", and we did human versus technology, "set the story in a futuristic smart city and focus on a protagonist who is a rebellious teenager. The film idea should evoke a bittersweet yet hopeful tone and include a plot twist in the ideas." Also, if I go back over to the site once again, we can come down to Claude and you can see the ways you should be prompting, with some example prompts. I won't go through them and bore you here; look in your own time. But you can be very conversational, like "I'm thinking of making a video about...". Did you see what that was? Opening with a goal or topic is quite important; some chat models prefer that. Which I am doing here: "create a set of 10 short film ideas", that's what I want to do. If you wanted to create a two-minute film, a hundred ideas, or a structure for scripts, start with that. So let's see how this prompt comes out compared with Gemini and ChatGPT, which we used previously. Okay: "Digital Echoes, 10 short film ideas". "The Last Library": in a city where all the books have been digitized, a 16-year-old secretly maintains the last physical library in her grandmother's basement. That's nice, and different from anything we've had. Some of the ideas were semi-similar
36
between these two, but that was completely different. Zara plans an offline sweet 16
37
disabling all smart devices for one night. Okay. That’s good. The glitch artist, a beautiful
38
art by glitching the city’s holographic advertising displays. Memory merchants in a world where
39
memories can be digitally stored and traded.15 year old Parker runs an underground service
40
helping seniors preserve memories. Okay, nice. The last handwriter yet writing would be a
41
dead art, a signal runners, Kyle leads a park, a parkour group that performs elaborate
42
runs between the city’s rare wireless dead zones. Okay, nice. These are very different
43
from chat GPT and Gemini’s responses. Uh, that empathy hackers garden glitches, uh,
44
the analog club, uh, symphony of silence. Okay. I actually quite like this signal runners
45
idea. So let’s copy this so he knows what we’re talking about. And I could ask actually
46
to improve or explain right here. So let’s actually explain and let’s see what it says
47
right here. I’ll break down the key elements and themes of signal runners in more details.
48
Setting and context: the story takes place in a hyperconnected smart city with rare dead zones where signals can't penetrate. The main characters: Kai is a teenager who likely grew up there; his physical skills represent what the parkour group stands for. Then the central conflict behind this, and the psychological aspects. So it's really good at breaking this down. If you were actually a scriptwriter working in industry, you would probably go back and forward, back and forward, and when you write a script, you want to know everything that's behind a character, an idea, or the reason behind something. And Claude is really good at that. Now, if you want to improve what we've had right here, I can highlight this section again. Before we had Explain, which is like getting the background to your story, a lot like a scriptwriter would do; now I can say Improve. It's going to ask: what would you like to update? Do you remember on the previous one, I said I want to make this into a two-minute scene, or film, with AI? Let's give it that prompt there. So this is the prompt that we gave in ChatGPT and Gemini, about how I want to develop this further into a very simple story using the idea, where a scene could be produced just two minutes long with minimal scene changes. Let's paste that in and ask it to improve. Here's a focused single scene, a two-minute story concept based on Signal Runners. Title: The Last Jump. Setting: a single rooftop at dusk, 30 stories high, city skyline. Core scenes: Kai is dressed in urban running gear, with an earpiece; an automated voice counts down; she removes it. With 30 seconds left, she runs towards the shed and drones begin chasing her. Okay, nice. And then the end of the scene. So the minimalist approach: it uses a single location, which is good for us both in the real world, if you were filming this, and with AI; it requires minimal dialogue, exactly as we spoke about in the last section and in the starting lecture of this section; and it focuses on visual storytelling through movement and lighting. What will be difficult with AI is the parkour: if she's flipping and jumping, it's going to be very difficult to do that.
So we may want to change this, but this is just how you develop an idea, so you get to know it. You might not want to use that for your story, or you could set yourself a challenge and try to make a video with that much movement with AI. It delivers a twist through a simple visual reveal and can be shot in true real time. Okay, that's good. Nice. So this was Claude.
You know what I’m going to do actually, just why I have this here, I’m going to say improve
79
and then I’m going to give it that. Then I’m going to say creating this video with AI,
80
and I’m going to see what it says right here. See if it changes any of that because it now
81
knows I’m making it with AI and not filming it in the real world. Very conversational
82
like this. Ah, if you’re creating this with AI technology, let me propose a version that’d
83
be more feasible with the current AI video generation capabilities. Exactly what we’re
84
saying. Focusing on simpler movements and clearer visual elements. So a digital breath
85
setting a single urban rooftop at sunset, clean, minimalist architecture, a two minute
86
scene breakdown is this, uh, okay. Teenage girl in simple dark aesthetic stands on a
87
rooftop behind a cityscape. This reminds me somewhat of the last section where we made
88
a workflow and we did a girl on top of a rooftop, didn’t we? That’s funny. A smart device protects
89
a holographic warning notice dead zone, 23 eliminations scheduled time remaining two
90
minutes, uh, walked deliberately towards a shed. Okay. Yeah. A lot less movement reaches
91
a shed wall, a tense expression, relief to herself. Camera slowly pans across the wall.
92
So this is really good. Probably better. Um, actually, and I’m going to now use this tool.
93
I didn’t use Claude as much. I’d played with him a few times, but I’m a kind of a chat,
94
a chat, GPT person. And you kind of find yourself always leaning towards one model
95
once you get used to it. But look how good Claude was for developing ideas and probably
96
for scripts. Also, uh, had developing ideas. You can go back and forward, even tell it
97
the tools I’m using AI tools and it recognized intelligently. Oh, with the capabilities that
98
we have with AI, let me just change this for you. So it’s a little bit simpler because
99
before I thought you were filming this, uh, as in the real world filming now, no AI here
100
is the change. That’s brilliant. That’s really, really good. So I would suggest going and
101
playing with Claude to develop your ideas is probably now my new favorite tool for using
102
for idea development. And when I come to do my own development at the end of this section,
103
you’ll see me develop the film that we’ve said we’re going to do throughout this course.
104
And I’m going to send a film festival. Um, as I mentioned in the earlier lecture, I’m
105
probably going to utilize Claude quite heavily. Very useful tool at all. Number three, I want
106
to show you for generating ideas. Let’s move on to the next one. And I want to show you co-pilot.
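The prompt pattern used throughout this lecture, open with the goal, then add theme, setting, protagonist, tone, and finish with constraints, can be sketched as a tiny template. This is only an illustration of the structure; the function name and its parameters are my own, not part of Claude or any tool shown in the course:

```python
# Illustrative sketch of the prompt pattern from this lecture:
# lead with the goal, then theme, setting, protagonist, tone,
# and finish with any constraints. All names here are hypothetical.

def build_idea_prompt(goal, theme, setting, protagonist, tone, constraints):
    """Assemble a chat prompt that opens with the goal and ends with constraints."""
    lines = [
        f"{goal} that explore the theme of {theme}.",
        f"Set the story in {setting} and focus on a protagonist who is {protagonist}.",
        f"The ideas should evoke a {tone} tone.",
    ]
    lines.extend(constraints)  # e.g. length limits, plot-twist requirements
    return " ".join(lines)

prompt = build_idea_prompt(
    goal="Create a set of 10 short film ideas",
    theme="human versus technology",
    setting="a futuristic smart city",
    protagonist="a rebellious teenager",
    tone="bittersweet yet hopeful",
    constraints=["Include a plot twist in each idea."],
)
print(prompt)
```

You'd paste the resulting string into Claude (or any chat model) exactly as we did on screen; the same structure works when you swap in a different theme, tone, or set of constraints.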
— CoPilot: Developing Video Concepts Seamlessly —
The next tool I want to show you is Copilot from Microsoft. If you go to copilot.microsoft.com, you can sign up for free; I just used my Gmail to sign up, I think, back when I signed up for it. Then you'll be confronted with a screen that looks something like this. If I bring up the slide here, prompting Copilot is of course much the same: make sure you are giving clear requests, incorporate key details, use step-by-step instructions for complex tasks (though we're not doing anything hugely complex), and then emphasize any constraints, et cetera. Like, for example, in the last lecture where we talked about Claude: the constraint of creating this video with AI, and it gave us a different result.
Now, Copilot is primarily great at evaluating already-existing input; it was designed to be used with Microsoft Word, Excel, et cetera. It's really good at giving suggestions, organizing content, and refining existing ideas, not so much at generating ideas, which is what we're going to use it for, but it is really good at refining them. So you could use it alongside other tools: generate the idea elsewhere, then use Copilot to organize it, or add to and refine an existing idea. But we're going to see what it's like at actually constructing ideas from the get-go. Let's use the exact same prompt as before; I don't need to go into too much detail about how we got this. If you haven't watched it, then watch the ChatGPT lecture, the first one, which is where we developed this so we can compare these side by side. Create 10 short film ideas. If I go back, perfect, this is on the site for AI video ideas; if I scroll down, I can come to Copilot and specifically see what the ideal prompt is. Input context or a document: we're not inputting a document, but you could be inputting a brief, a script, or an idea template you already have. Then a specific goal: I want 10 ideas. Then a request based on context, for example asking Copilot to provide suggestions or ideas based on the specific content you're working with. We're going a step before that. Let's see how Copilot copes with this. Okay: here's a set of short film ideas exploring human versus technology in a futuristic smart city. Silent Rebellion: in a city where every sound and movement is monitored, a teenager discovers an old tape recorder and starts spreading unmonitored messages. Okay. Echoes of the Past: teenagers stumble across an ancient library. That's funny, a library came up in the last one, didn't it? Digital Ghost: the protagonist hacks into a forbidden virtual reality and meets the digital avatar of a long-lost relative. Oh, that's quite nice. Heartbeats and Hard Drives: a rebellious teenager fights against a law mandating brain implants for civilians. The Last Human Touch. Neon Shadows. Okay, I'm going to go through these and see which one I like, and we can develop it.
So it’s funny that look at this pulse of the streets. A protagonist used parkour to navigate
34
the city where all movement is regulated by tech. They become a symbol of freedom for citizens. Let
35
me go back here to Claude. And then this is number six signal runners. Kyle leads a parkour
36
group that performs elaborate runs between the city’s wireless dead zones. So wherever the
37
artificial intelligence are pulling these ideas from somewhere online, this one has come up twice
38
now, and it’s very obscure to be parkour. So let’s actually run with this idea to see what it comes
39
up with as I start asking it to do a little bit more. Okay. So with this idea, let’s paste that
40
in. And just as we did before, let’s tell it, develop this further into a very simple story.
41
I won’t say AI, we’re using AI for the idea where a scene could be produced just two minutes long
42
with minimal scene changes. Now, this is what copilot by Microsoft is meant to be for, but
43
giving a kind of elaborating on an existing idea. So we gave it the existing idea. It actually
44
generated itself and we asked it to elaborate on this. So this is really good. Even got the
45
same name. So wherever it’s pulling this from, it’s the same funny that AI is doing that.
46
So, Pulse of the Streets: a futuristic alleyway. Kai, the rebellious teenager, is our protagonist; a guardian drone and AI monitor the city's movements in a sleek, neon-lit alleyway in the heart of a smart city. Tall buildings loom overhead. The start of the scene: the camera pans down, Kai in a voiceover: in a city where every step is counted, freedom is a race against time and tech. Kai checks a digital watch showing a countdown before the curfew activates. This is kind of similar to the last AI we were using, isn't that funny? Okay, it's given me a whole scene, but let's do this. What I loved that we saw in Claude (if you didn't watch that, go back and watch it) was how I was able to give it the specifics that we're using AI to make this, and it changed the scene specifically to help and aid creating it within AI's limitations. Let me just add this in here. Okay: with this idea, which I can see I just spelt wrong, missing the last letter, with this idea, where a scene could be produced just two minutes long, we are using AI to generate this video. Let's tell it that and see if it changes its response and the shots it's going to use, just like Claude did. A reminder: Claude said, ah, if you're creating this with AI technology, let me propose a version that would be more feasible with current AI generation capabilities. Incredible. So let's see what this says. Sure, let's expand on this idea to produce with AI. Okay, the characters are the same. Opening shot: the camera pans across a skyline of neon skyscrapers, then focuses on a narrow alleyway; Kai is in it. Action begins, the drone's warning. What it's done here, rather than giving me a script (because the previous response was very much a script you'd use in traditional TV), quite intelligently for AI, is break this down into bullet points. I call this a video structure, and we talk about it in the next section when we go over scripts and structures; it's done a nice structure for us in bullet points. This is what Copilot is really good at: organizing data into a manageable, bite-sized format like this. So: the opening shot, narration, the action begins, the drone's warning, chase sequence, plot twist, connection, and conclusion. What it hasn't done, quite as advanced as Claude did, is understand fully that I'm creating this with AI. What are AI's limitations?
So let’s now change the scenes to fit with the limitations that we had. I think of all these
73
tools, ChatGPT great up there and Claude maybe just above it on the AI generation. But I really
74
do like Microsoft co-pilot for organizing this. I like to consume data like this in bullet points,
75
one step to the next to the next. So there’s pros and cons for all of them. It’s just another tool
76
I want to show you go and play with it. Maybe it’ll be your tool of choice. Maybe your brain
77
works this way in breaking things down like this. Mine definitely leads towards Claude slightly more,
78
but I just wanted to show you this. So on the next lecture, I’m going to go over perplexity AI,
79
which is the last tool for generating ideas. And then I’m going to generate my own idea
80
for the video inside this course.
— Perplexity: Efficient Idea Generation for Videos —
Now, the last tool I'm going to show you, before I go on and actually develop our own idea for the project we're going to do throughout this course, is Perplexity AI. Just go to perplexity.ai and you'll be able to log in. It's free to use and it's a great bit of software, a great tool. If I come over here onto the site, I think I put it quite well here: it's great at taking on data and giving a data-driven, concise response, ideal for finding quick ideas or verifying trends in specific niches. So we're going to see how it does at giving us ideas, but it should be giving us concise ideas and using a data-driven approach: what's trending, what's popular, what's been done, what's good, et cetera. If I bring up the slide just quickly: the usual applies when prompting Perplexity AI, use clear language, no colloquialisms, et cetera. But unlike more conversational tools, Perplexity is about providing specific insights, comparisons, and factual summaries, so ask precise questions or ask for clarification on a topic. I think there's a great example back on the site here, with the ideal prompt I have right here: I want to create a video about zero-waste living tips; what are the most important points to cover? This is really an example of where Perplexity excels. You could say: I want to create a documentary about the environmental impacts of X, Y, Z, whatever your topic is, and ask what are the topics I should be covering, the main points. That's where Perplexity excels. If I were going to generate informational, educational content, or documentary-style videos, then Perplexity is really good for that. But as we want to do, we're going to compare them side by side on our task right here: generating video ideas. Firstly, actually, I've just seen that Perplexity has this question already on the site here: What is Perplexity AI? And it had one earlier that was: what are the differences?
25
So definition, a conversational search engine that answers queries using natural language,
26
predictive terms, utilizing sources on the web. Okay. And here’s all about its background
27
and development. It’s quite open-ended. Yeah, similar to OpenAI’s ChatGPT. Let’s go in and
28
prompt it with the prompt that we developed. If you missed our first lecture on ChatGPT,
29
go back and see the beginning of that. We show how we develop this by generating ideas,
30
for example, getting this theme, getting our protagonist, et cetera, and what the tone
31
should be. So let’s just give it this and see what it comes up with. Here’s a collection
32
of 10 short film ideas that delve into the theme of human versus technology set in a
33
futuristic city. Okay. The Echoes of Tomorrow. In a city where AI manages every aspect of
34
life, 16-year-old Zara discovers a cassette tape. Okay. A glitch in the system. Leo attacks
35
Abby Teen, hacks to the city’s mainframe to expose its corrupt leaders. The Algorithm’s
36
Heart. The Forgotten Garden. The Last Connection. So we don’t have a parkour one, which you
37
saw in the last two tools that we use, which was funny to have those similarities. So let
38
me just read through these and see which one I’d like to develop inside perplexity. I like
39
this number six here, Code of Silence. In a smart city where everyone’s monitored by drones,
40
I can imagine the shots of people walking with drones following them. Rebellious Teen
41
Sam discovers a way to unlock surveillance signals and start organizing secret gatherings.
42
Okay. Let’s copy that. I want to develop that slightly further. And then what I want to
43
add onto the end of here is what we did on the last one. If I go back here, I want this ending.
44
So develop this further to be a very simple story. The idea we create a scene because they’re my
45
limitations. You might be making a full length AI movie for all I know. This is just the limit
46
that I want. The constraints I’ve put on, which we spoke about inside the slide here. Make sure
47
you give the constraints that need to be in here. It’s two minutes long. Minimal scene changes. Very
48
simple. Let’s see what it gives us. So setting a dimly lit rooftop futuristic city surrounded by
49
towering skyscrapers on the rooftop gathering. This is 30 seconds long. A group of diverse teens,
50
including Sam, huddled together under the glow of makeshift lanterns in their scene,
51
illuminated by excitement defiance. So it looks like it’s not giving me a whole story. Setting
52
this up, you may have to do if you were creating this, give yourself a voiceover or text on screen,
53
explaining what the setup of the scene is, etc. Background noise, drones buzzing overhead. These
54
are suddenly a drone descends from the sky, lights up flashing stamps, steps forward, holding
55
unexpected. OK, the unexpected alleyway, new alliance. And then the conclusion, the film ends
56
of a wide shot of the rooftop of the city skyline. All right. What I want to see is the main thing.
57
It doesn’t matter for this example exactly what that is, what it gave us. What I want to see is
58
if it’s intelligent enough, just like we saw with Claude to do this. I want to see I am creating
59
this with a video only. I want to see if it changes the scene to be different here based on the fact
60
that I’m using AI. OK, dimly lit rooftop, futuristic city. So this looks very similar. You see it’s
61
done these bullet point markers just like we saw with copilot copilot did exactly the same right
62
here. A drone suddenly appears above them. OK, they black away. So it hasn’t changed the scene
63
so much. Let me give it a different command, a different prompt right here. Let me say this
64
AI only. So change the scene to make it easy to produce with a video limitations. Let’s see what
65
it does this time. OK, send a small dimly lit room with a few chairs around it. Cluttered gadgets.
66
OK, salmon, three frames a seat around a table. I don’t know if that’s any easier with AI computer
67
screen illuminating their faces. Sam holds up a signal blocker device with blinking lights,
68
leaning and speaking passionately. It’s how good Claude was that it knew that it’s using. Did it
69
have it in the conclusion here? It’s using. Yeah, here we are. Minimal rapid movement. Simple,
70
clear, costuming, complex, without complex patterns. And it also had about here about
71
not using too much dialogue, which perplexity is giving us lip sync dialogue here, which we’d
72
need. Background noise. Suddenly a drone appears. OK, so it’s great at organising just like this,
73
but fairly similar to copilot. I’ve not used perplexity an awful lot, and it is good for
74
organising this in exactly the same way as copilot, much around each other. Probably
75
perhaps when you’re using this, you’d like this more. Or if you’re going to input data,
76
for example, if you want to make an educational video, informational video around data you were
77
inputting here, then input that and say make a story or an info video around this data.
78
That’s probably more what perplexity is used for or would be better at. But for our task here,
79
creating a video with entertainment skew in mind, not so much. Still, I think winning the race here
80
is Claude. So in the next lecture, I’m actually going to do that. I’m going to use probably
81
chat GPT and Claude and I’m going to develop my own. You’re going to see how I develop
82
myself an idea for a video that we’re going to then continue throughout this course,
83
all the way through from ideas, scripting, making the visuals, audio and then making the video for
84
all the way through to put that into film festival. So let’s go on to the next lecture
85
and let’s produce the idea.
— Course Project Kickoff: Generating Ideas with ChatGPT and Claude —
So now, finishing off this section on generating ideas, I'm going to actually generate the idea that I'm going to use for this course project. If you're following along and want to see, one, how I do it, and also follow along with the project we'll be making throughout the course, this is the very first step of creating this idea. I'm going to use my two favorites here, ChatGPT and Claude; I wouldn't ever just use one AI tool, but you can. And depending on whether you needed an info video, perhaps you'd use some of the others, as we've explained in the last few lectures. But I'm creating fiction, I want to create a story, so I like these two tools, and I'm going to use them side by side and generate some ideas. Now, you've probably got an idea; if you had none, then I've shown you, in that very first lecture on ChatGPT especially, how to generate some. But I really like the idea of creating something around conflict, be that actual conflict like war or something along those lines, and I want to see what they bring out here. So I'm going to say: generate 10 ideas for a short film around conflict/war. Because I don't mean conflict between two people arguing over cutting the tree in the garden next door, I want to let it know: so, conflict, war, current or historical, and make the results simple and brief. At this point, I don't want it to start generating whole ideas and structures and scripts and things. And if I go back into here, this is where this is really helpful.
I can come down to ChatGPT and see, hey, what's the ideal prompt here? "I want to create and explain a video about technology, can you suggest a title for this?" I've said I basically want to generate a film, so I can say: I want to create a short film using AI video, and I want it to generate 10 ideas for this short film around conflict/war, current or historical, with the results simple and brief. Okay, let's copy that, and I'm going to paste it in here. While I'm waiting for the result, let me go over to Claude; I'm going to paste that in too. But before I do, I'm just going to go back and check our site to see what it says about Claude and the ideal prompt for that. So, developing: "I'm thinking of making a video about this. Can you suggest an outline?" Okay, pretty much the same language being used for ChatGPT and for Claude. So let's do this. What we do know is that Claude is more intelligent at knowing the limitations of AI in this example, so I'll be interested to see what it comes up with. Let's put that in there and see what it says. Whilst I'm waiting, I'm going back over to ChatGPT to see what the 10 results are. The Last Letter: a soldier during World War Two writes a heartfelt letter to his family before going to battle; the film cuts between scenes of the battlefield and the family reading the letter. Okay, nice. The Truce: during a brutal modern-day urban conflict, two soldiers from opposing sides find themselves trapped together in a destroyed building. Silent Heroes: in a small village during a historic occupation, villagers secretly help wounded soldiers, hiding them from occupying forces. And The White Flag: a young soldier ordered to storm a village held by the enemy finds himself face to face with a child holding a white flag. I like this idea. I don't like any of these specifically, exactly as they are, and I want to develop them, but this just spoke to me here: a child holding a white flag. A child. When we're talking about conflict and war, we're often thinking about the soldiers, but perhaps telling this from a child's point of view might be quite nice.
I might ask it to develop some ideas on that. And this is why generating these ideas is good: the result might not be exactly it, but one bit might speak out to you that you hadn't thought of before you generated them, and then you can generate based on the idea it gave you. Let me just go and compare this initial prompt with what Claude gave us. Okay, so: AI short film concepts. The Last Letter. Wait a minute, let's go back here and have a look at this. The Last Letter. Okay, let's have a look: it finds a letter, and yeah, much, much the same. Two Sides of War: split screen showing two children on opposite sides of a border wall. Okay, this is speaking to me a little bit more: mirroring each other's movements without knowing it, they both draw peace symbols on their respective sides. And the AI-friendly elements for this, it's great that it does this: slow movements, still poses, static shots, simple drawing animations. Nice. The Empty Classroom: a classroom slowly transformed from peacetime to wartime, desks become askew. And The White Flag, much like we saw in the other one. So I'm liking Claude's responses and its understanding of what's needed for AI far better than any other tool here. So, Claude, I'm awarding you the winner, and I want to develop further with you. I like this idea right here, so I'm going to ask it to develop it. I want to develop this idea, but change it slightly. I love the idea of these two children on either side of a wall, but I don't want an actual physical wall, as if it's the Berlin Wall or something, even though there was a conflict like that when the Berlin Wall was up. What I want is to change this to two children on opposite sides of the world. And I like the idea of setting it in time; I think this last one spoke to me here, this World War Two one. Let's do World War Two; let's do the USA and Japan. So that's just an idea I'm coming up with here.
I’m just feeding it information and seeing what Claude comes out with. So let’s give
66
it this and see what it’s going to generate. Let me develop more detail. Two minute AI
67
friendly concept focusing on parallel movements in two children during World War Two titles,
68
children split screen throughout. Okay, I may or might not do that. Left side shows
69
an American suburban home backyard. Right side shows traditional Japanese home and garden.
70
Both scenes occur at their respective dawn symbolizing the same moment in time. Detailed
71
scene breakdown left. An American girl, eight in simple 1940s dress sits by her window right.
72
A Japanese boy, eight in simple traditional clothing six by his shoji screen. If that’s
73
how you pronounce that apologies. Both looking up their respective skies. That’s a really
74
nice shot. You can imagine that, can’t you? The girl starts drawing her father in military
75
uniform. The boy starts drawing his father in military uniform. Both children pause to
76
look up occasionally. I think I know what’s going to happen right here. Both children
77
add to their drawings. These draw their mother themselves. Okay, the expressions mirror each
78
other hopeful but sad. Both children pin their drawings to their wall. Camera slowly pulls
79
back to show left an American photo of the father in uniform, right Japanese, a photo
80
of the father in uniform. The final shot dawn sunrising split between the two screens. Okay,
81
so it’s showing these two children of war here, side by side happening at the same time
82
drawing pictures of their fathers who are in military draws back to show they’ve both
83
died. This is a really nice concept and fairly simple and could be very visually stunning
84
where you’re using split screen throughout. So I really like this idea. What I want to
85
do is develop that further. In the next stage, we’re going to develop a script and a structure
86
based on this. So I like this. What I might do is just have a little research myself around
87
World War Two, USA and and Japan. And I could actually ask chat GPT to do this for me. So
88
give me some bullet points about the World War Two, and specifically between Japan and USA. And
89
I just want to see I just just to take night something in me. Pearl Harbor attack us declares
90
war Battle of Midway island hopping strategy. Okay, okay, okay, okay. Let me go back up here.
91
This reached out to me Pearl Harbor attack. And then of course, if I keep coming down,
92
there were kamikaze attacks, the Manhattan Project, Hiroshima, and Nagasaki bombings.
93
Yeah, okay. And then Japan surrenders. These are each side of the war here. Here’s the end.
And here's the start. Okay, something is reaching out to me here. Alright, so I want to develop this idea: children in the USA and Japan during World War Two, told from the point of view of the children, with the USA side set at Pearl Harbor and the Japanese side at the bombing, both showing they lost their fathers, which is what came from the last story. I've taken that to children, and you can see how we're developing the idea right here. ChatGPT gave me some of those ideas, so let me just see what it has. My prompt: "I want to develop this idea to children, USA and Japan, World War Two, from the point of view of the children, the USA set at Pearl Harbor and Japan's at the Hiroshima bombing, both showing they lost their fathers." Let's see what Claude does. Because it's conversational, like ChatGPT, it knows what we're after right here: I'm trying to make an AI film, and it already knows that. So let's see: "Let me develop a sensitively focused two-minute concept that captures these profound moments through children's eyes. Title: When the Skies Change. A split-screen narrative showing two parallel moments: Pearl Harbor, December 7, 1941, and Hiroshima, August 6, 1945. Two-minute scene breakdown. Peaceful morning, left screen, Pearl Harbor: an American girl in a simple nightgown sitting at a breakfast table; her father, in Navy uniform, kisses her goodbye; warm Hawaiian morning light through the window; Navy ships visible in the distance in the harbor." Okay. "Right screen, Hiroshima: a Japanese boy, eight, in a simple yukata having breakfast with his father; the father in civilian clothes preparing for work; soft morning light spilling through the paper screen; the city visible through the window. The girl drawing pictures by the window, a peaceful morning; suddenly her head lifts, hearing planes, paper and crayons still in hand." Right. "The boy tending to a small garden, clear blue sky above; he notices an unusual bright flash, garden tools still in hand." This is getting intense. I really like this.
"Left: the room darkens with the shadows of planes; the picture floats from her hand; her father's hat, seen earlier, visible on the table; the girl's eyes widening in fear." And on the right: "intense light fills the screen; the boy's garden paper lanterns glow briefly; the father's glasses, seen earlier, on the porch; the boy shields his eyes. Right and left merge into a single frame: both children sitting in similar poses, each holding their father's items, the girl with the Navy hat, the boy with the father's glasses, the same dawn sky above them both; they both look up at the same sky."
Okay, great. Now we're getting somewhere with the scene. I like this. We've got a scene from Pearl Harbor. We can imagine the dates coming on screen and that juxtaposition between seeing the girl at Pearl Harbor and the boy in Japan. I'm not sure I'd use split screen; I might just jump from one scene to the next. The father goes off to work in Japan, a civilian, and the Navy father leaves for work, and they're both going to come up against what we know is inevitable: the Pearl Harbor attack there in Hawaii and the Hiroshima bombing. You may want to show some scenes of planes coming over and bombs, or just the sounds, because that's quite difficult to generate with AI: you could have the girl's face and the noise of this bombing, and the boy's face and the noise of that bombing, although I know AI would do a great kind of atom-bomb look, depending on how much you wanted to show here. So, points for sensitivity: no explicit depiction of violence or destruction; focus on the emotional human experience; show impact through subtle changes; emphasize a shared emotion; use the children's perspective to maintain innocence. I really like this, really like it. "Would you like to expand an aspect of this concept? I can detail specific shots or suggest alternatives." Yes: expand on this; I will be using AI video generation for this project. Let's see if it changes anything now that I've given it the limitations of AI. We know from earlier, and if you haven't seen it go and watch the Claude lecture that I did earlier, that it did a really good job understanding this. If I can find it... ah: "If you're creating this with AI technology, let me propose a version that would be more feasible with current AI generation capabilities." Really intelligent stuff, really nice. So let's wait for Claude to generate this, and we'll see if it makes any changes based on the fact that I'm using AI. Okay, Claude has begun: "I'll break this down into very specific, AI-friendly shots, focusing on elements that current AI video generation can handle well." So good. Claude is so good for this. "Title: When the Sky Changed. A detailed shot-by-shot breakdown optimized for AI generation. The opening sequence: close-up shot, the American girl's hands coloring with crayons, a simple 1940s wooden table surface, warm morning light, the drawing shows her family, Navy ships visible through the window behind her. Hiroshima side: close-up shot, the Japanese boy's hands folding origami, a traditional low table, morning sunlight through the paper screen, origami cranes scattered on the table, city rooftops visible through the window." So it has broken this down shot by shot. When we come later to start making our mood board, I could pretty much copy these shots as they are and, if we use, I don't know, Midjourney like I've shown you earlier, start pasting them in to build my storyboard. I could use these, one, two, three, four, five, seven or so shots that I have here to generate my storyboard. Claude is so good at that. So what I have here is my idea, and I'm going to develop it: this side-by-side, happening-simultaneously World War II Japan and Pearl Harbor story, from the point of view of the children, about loss, this kind of sad story. That's what I'm going to develop here for our project. So in the next section I'm going to develop the script for that and the structure for the video, and we can start working on that using ChatGPT and some other tools that I'm going to show you that are specifically good for generating scripts. I'll see you over there, but next I have a quick task for you.
— Task: Building Your AI Video Idea —
So, to finish off this section on AI video ideas: you've now seen five tools, and you saw my favorites too, especially Claude, which I think is amazing. Using whichever tools you like, please develop five ideas for a project you'd like to work on. Keep developing them and narrowing them down. You saw me: I could have just gone with that very first idea, but instead I said, okay, I like that tiny bit about children and conflict and war, and then I liked a bit about World War Two, and I combined them and kept working and working on it. Use the different tools to compare and get the best ideas. If you use all five of these tools, you will quickly see which one is your favorite and which one works best for your projects. If you're creating, say, educational or informational content, you might want a different tool than I did when I was creating a fictional piece and using Claude. Go ahead and get yourself at least five ideas. From those five you may have one you want to progress to the next stage, but have at least five, then sit on them, sleep on them, develop them. By the time we get into the next section on scripting, you'll have the idea from which we can start generating a script or structure for the video, from which we can then make the mood board and storyboard images, and then the video is just a few steps away. So please go ahead and do that for the task, and I'll see you in the next section.
— AI-Powered Scriptwriting: Tools to Bring Your Idea to Life —
In this section about scripting, we are going to go over five possible AI tools that you could use for scripting. You're going to quickly work out which is your favorite, and I'm going to give you five examples here. You may use more than one; you'll see me use them. I predominantly use ChatGPT, and you probably will too at the minute, but maybe Scribbler, Chatsonic, or TextCortex; you'll see. We're going to go over these five tools: ChatGPT and Gemini, both of which we looked at in the previous section where we talked about generating ideas, and now some AI tools specific to scripting, that's Scribbler, Chatsonic, and TextCortex.
Now, if you go onto the site here, this is aivideo.school, AI video scripts. I've got a page for you here which breaks down everything, which you can go through in your own time, but I will cover these in these lectures. Beyond that, it's important for you to understand the structure of a script. You can ask AI to generate a script, but if you don't know what one should look like and how it should be formatted, everything from the actual format to the structure of it, the classic three-act structure, then it's difficult to know what you're looking at. So on the site here, and you can also download them, is the BBC's (that's the British Broadcasting Corporation) TV screenplay format, the film format, and then an overview of what a three-act structure is. You can download these either from here or from the course itself and get familiar with them. I've also got a breakdown of what a three-act structure is, which some of you will be familiar with and some won't. It's basically a setup, then an inciting incident, something happens, there's a confrontation, and then there's the resolution. So: setup, something happens, they've got to overcome it, back to a resolution. Equilibrium, disequilibrium, equilibrium. That's what we tend to say about a three-act structure.
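That equilibrium–disequilibrium–equilibrium shape can also be sketched as data when you're planning timings. A minimal sketch; the 25/50/25 act proportions are my own illustrative assumption, not a rule from the course:

```python
# Split a video's runtime across the three classic acts.
# The 25/50/25 proportions below are illustrative, not canonical.

def three_act_timings(total_seconds: int) -> dict:
    """Return rough per-act timings for a video of the given length."""
    return {
        "act_1_setup": round(total_seconds * 0.25),          # equilibrium
        "act_2_confrontation": round(total_seconds * 0.50),  # disequilibrium
        "act_3_resolution": round(total_seconds * 0.25),     # equilibrium restored
    }

print(three_act_timings(120))  # timings for a two-minute video
```

For the two-minute film we're developing, that would budget roughly 30 seconds of setup, a minute of confrontation, and 30 seconds of resolution.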
But this course specifically is looking at AI tools: what to use, which is best, and best practices for generating scripts. That's just so you're familiar with the background. So if you scroll down, much like the other sections right here, I have these five tools. In the dropdown, you can go in and it will show you specifically what you should be doing for script generation with ChatGPT. There are example prompts in here: building story structures, with an example prompt for that, dialogue, scenes, refining characters and backgrounds, drafting full scripts for small scenes, and developing ideas. Ideal prompts for ChatGPT we know from previous sections. And then there are more examples and the differences between the other platforms we're talking about, and where you might want to use one as opposed to another. For example, you might want to use Gemini if you were doing something more factual, as opposed to entertainment. That could be a possibility. So I won't go over these in depth.
Generally speaking, we know that ChatGPT's strengths are its conversational understanding, narrative structure, and idea creation. The best prompting strategy, as we know, is to be clear and concise, and to include anything about the characters, dialogue, et cetera, that we spoke about before when we talked about the kind of prompting ChatGPT likes, and the going back and forth that you saw in the previous section. Now, Scribbler is a good one. In a couple of sentences: it's specialized for screenwriting, scene breakdowns, and script formatting. So the best way to prompt it is to provide a structured narrative outline, specify scenes, beats, and character movements, and leverage the formatting features to quickly draft a properly formatted script. It's perfect for that. Chatsonic is ideal for scripting dialogue especially, as you're going to see. It's also good at knowing trending topics, and in the way it produces dialogue you'll see that it's a little more in touch with the real world, which is quite scary; you'll see that when we use it. TextCortex is really good at long-form content: if you were doing a feature-length piece or a continuing series, then sometimes TextCortex is better.
And then Gemini, we know, is multimodal. You can get imagery, and it quite often likes to display things in that kind of graphical way, which I quite like to receive. It would be ideal if you were making documentaries or factual, educational, informational videos. But we are going to generate a script using all of these and see the differences. Then at the end of this section, I'm going to generate the script for the project we've been working on. If you remember, I'm making a film throughout this course, and we're actually going to send it to a film festival at the end. We generated this idea based around Japan and Pearl Harbor simultaneously, from the point of view of children. So at the end of this section I'm going to be using one of these tools, possibly two, but probably one, to develop that script, and you can watch me develop, in real time, the script we're going to be using for the course. But I will be going over each of these, one lecture at a time. The first few lectures are going to be all ChatGPT, and then you can apply the same techniques in each of the others; I don't need to show you each one in full, so I'll give you just a quick overview of those. First I'll show you the prompts for scripts and best practices, then script versus structure, which is a different thing, then how to set up a voice or a tone, as if you've got your own personal writer in ChatGPT working with you, and then refining scripts with ChatGPT. You can use many of those same tactics inside all of these tools, but I'll give you an overview of the others after that, right before we make our script for this course. Okay, let's get on with it. Let's move on, talk about ChatGPT, and start generating some scripts using it.
— ChatGPT: Scriptwriting Made Easy – Prompts —
We're going to be utilizing ChatGPT to generate a script. This is really the first video on generating a script. I'm going to show you the prompt and the way I use it specifically for ChatGPT, and then how you would tweak it for the other platforms, which we'll see later. This is the first of four lectures specifically on ChatGPT. First I'm going to show you prompting for scripts, then script versus structure, then how to set a tone, that's like having an actual personal assistant in ChatGPT, and then refining your script, before we check out the comparisons between Scribbler, Chatsonic, TextCortex, and Gemini.
Okay, just like with the last section: if you haven't watched it, you won't understand how we developed an idea, but we developed ideas in the last section, so we're assuming you have yours. If you did your homework from the last lecture, the task, then you will have your idea and you'll be able to come into ChatGPT and follow along with what we're doing now. So if I scroll down on the site here, once again this is the AI video scripts page you have access to as a student. If I scroll down to ChatGPT, I can see all the example prompts for different things: getting an idea, building a story structure, creating dialogue, refining characters, et cetera. Whether you need these will depend on what you're making: a silent movie without any dialogue, an advert, a promotional video, something educational; it's going to be very different depending on what it is you are creating.
So what I'm going to do is scroll down to this bit; I love this, and it's in each section: the ideal prompt structure for ChatGPT. It is: start with a clear objective, provide any context details, request specifics, and give tone and style. So I'm going to just paste in my prompt here, and I'm going to break it down so you can follow along. I start with "Act as an experienced script writer." It's something I quite like to do. I think I found it here; I'm going to show you this blog that I quite like to read for prompt advice, and it's got all the advice here for generating ideas, or some further reading if you want it. They always like to start with "act as an experienced script writer." I've seen people phrase this before as "act as a script writer," "you are a script writer," even "you are Quentin Tarantino" or something, which we can get to later. But I like to start with that. There's no harm in it, and if I took it away there would be very little difference, but it just tells it that it is a script writer. Okay.
So the first point: start with a clear objective, specifying the purpose or genre of the video. I'm saying generate a script for a two-minute video. A clear instruction is the first thing I give. I also want to tell it about any limitations that I have: to produce the video, I will be using only AI video tools to generate the visuals, so keep AI video generation limitations in mind when producing scenes and characters, and keep them simple enough. Okay. And the theme of my story is lost love. This is what I developed in the last section, and you would have done the same; this is my development, my idea for a video. Okay.
Next: provide context and details, mentioning any specific elements or themes that you want to include. I have: my main and sole character is a young boy. I haven't decided exactly on age, ethnicity, or location in the world; I'm going to leave that up to ChatGPT here. Maybe it'll inspire me. I could say he's in the USA, he's Caucasian, Black, or Latino, living in California or New York, in the city or in the countryside; I could get very specific here, but I want to leave it. Sometimes I quite like to leave it up to ChatGPT, or whichever software you're using, to see what it comes up with. And: he has lost his dog, his best friend. This is the lost love. He has lost his dog, his best friend.
The next point was request specifics: ask for any key points, outlines, or dialogue. I don't have any dialogue; I'll show you here. But what I do ask, and as we mentioned in the last section, you can go ahead and read all about the three-act structure and download it right here on the site if you want to, is to ensure the script follows the three-act structure and has a clear resolution. That means the piece, although only two minutes, will show a setup, a state of equilibrium in whatever's happening; then there's the issue that's going to be overcome; and there's a resolution at the end. I also say there should be no dialogue, but if needed there can be a narrator's voiceover. Because we're using AI tools, I don't want to lip-sync; I don't like the look of it. But I don't mind a narrator's voice, and some tools, as we'll see later, like ElevenLabs or Filmora, have some great voiceover features. So I don't mind having a narrator's voice, leaving that, once again, up to ChatGPT. And then lastly, tone and style: indicate the desired tone, casual, formal, humorous. I say my tone is calm, romantic, and heartfelt.
Now, just to go back a stage to point number three, request specifics, outlines, or dialogue: if you already had something in mind with your idea, and you saw when we generated ideas that it sometimes went into a little more detail than just producing an idea, this is the point at which you put it in. For example, if, while generating ideas, it said there was a cliffhanger at the end, you'd put that here. Or a specific dialogue scene, or some characteristic the character has: if they have a stutter, they're missing a limb, they're a war veteran, whatever it is, put that in the middle two points here, points two and three, for ChatGPT.
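If you'd rather assemble this outside the web interface, the same four-part prompt (clear objective, context, specifics, tone) could be built as a plain string and sent to whichever model you use. A hypothetical sketch with illustrative wording; the actual send-to-ChatGPT step is left out so it runs offline:

```python
# Build a script-generation prompt from the four parts described above.
# All example wording is illustrative; swap in your own idea.

def build_script_prompt(objective: str, context: str, specifics: str, tone: str) -> str:
    parts = [
        "Act as an experienced script writer.",  # role framing, as discussed
        objective,   # 1. clear objective (length, purpose, genre)
        context,     # 2. context: production limits, theme, character
        specifics,   # 3. specifics: structure, dialogue rules
        tone,        # 4. desired tone and style
    ]
    return " ".join(parts)

prompt = build_script_prompt(
    objective="Generate a script for a two-minute video.",
    context=("I will produce the visuals only with AI video tools, so keep AI "
             "generation limitations in mind and keep scenes and characters "
             "simple. Theme: lost love. The sole character is a young boy who "
             "has lost his dog, his best friend."),
    specifics=("Ensure the script follows the three-act structure with a clear "
               "resolution. No dialogue, but a narrator's voiceover is allowed."),
    tone="The tone is calm, romantic, and heartfelt.",
)
print(prompt)
```

Keeping the four parts as separate arguments makes it easy to reuse the same skeleton across projects, changing only the context and tone.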
Let's run this and see the results. Okay, I'm just going to read through a little of this and see what it has come up with. Title: A Boy and His Best Friend. Duration: two minutes. It's heartfelt and emotional; the theme is lost love; the character is a young boy aged six to ten who has lost his dog. So yes, that's right. Okay. Act one, 35 seconds. Scene one, establishing the bond: "The video opens with warm, soft morning light filtering through the trees in a quiet park. A young boy, around seven years old, runs joyfully through the field with his dog, a small brown and white terrier. The boy laughs and throws a stick, and the dog runs after it with boundless energy. Gentle, soft piano music underscores the scene." Nice. Visual focus: AI-generated visuals of a field, a small dog playing, and a happy boy. I could quite easily do this, as you saw two sections ago when we played with some AI image and video generation in the workflow section. It's almost going to be like a montage of shots: the boy laughing, running, sharing moments of play with the dog. No problem. And as you saw me do on ElevenLabs, I could easily generate the sound effect of a young boy laughing there. No problem.
"Cut to a shot of the boy resting on the grass, the dog nuzzling close to him. The boy pats the dog's head and they share a quiet, contented moment together." It would be ideal to show that exactly as written if you were filming it yourself, but with AI this could quite easily be a shot of the boy: the music calms down, we fade out and fade in, the boy is lying on the grass with the dog there, then fade to black as if the boy's fallen asleep. Narrator, in a calm, warm voice: "They were inseparable. Two best friends finding joy in the simplest moments of life." Act two, this is the confrontation, the loss: "The visuals shift to a slightly overcast day. The boy is seen wandering through the same field, calling out and searching frantically. The stick from the earlier scene lies abandoned, half buried in the mud. The boy runs around looking with growing worry, but the dog is nowhere to be found." So again, I could do an empty field. Once I've generated these first images, I can generate the same thing, change the boy's clothing, and give him different expressions. Not a problem with AI. "Cut to a scene of the boy sitting alone on a bench." I would mirror that first scene where he fell asleep on the grass, have him sat there or something, maybe. "His feet swing in the air. He holds his dog's collar in his small hand, staring down at it. Rain softly begins to fall, reflecting his feelings of emptiness and loss." Now, this is a bit ambiguous right now: he's holding his dog's collar, but the dog didn't have a collar. Has the dog died, or is the dog missing? Let's see. Narrator: "But one day, life changed. His best friend was gone, leaving only memories in the places they once roamed."
Act three, finding peace: "The visuals brighten slightly. The boy returns to the field on a sunny day, still holding the collar. He stands in the spot where he and his dog used to play, looking around as if expecting to see his furry friend running towards him. But instead of despair, a small smile appears on his face, as if he senses the dog's spirit is still with him. AI-generated visuals of the boy standing, sunlit, holding the collar; a gentle breeze brushes past." Okay, yep. "Cut to a final shot of the boy walking away from the field, holding the collar loosely in his hand. He looks back once with a smile that hints at a bittersweet acceptance. The screen fades to black. Narrator: In time, he learned that love doesn't always stay, but it never truly leaves." Okay, so I'm going to read the summary: "The video tells the story of a young boy who loses his beloved dog, his best friend. The bond they share is shown through visuals of playful, happy moments. When the dog disappears, the boy is left heartbroken, struggling to understand the loss. In the end, he finds solace in the memories and love that remain, learning that love may not always stay, but it never truly leaves."
Okay, so this is a complete script: acts one, two, three. They set up the boy and his dog; the dog goes missing; what's happened, oh, he must have died; and the boy has accepted that. It's a nice little script, because I gave it the limitation of only two minutes, and this is pretty much all it can do in two minutes. If I were to ask for a ten-minute, or even a five-minute, script, I guarantee there would be a more rounded arc. I would like to develop this: there seems to be a bit of a jump between scene two, when the boy is still sad in the field holding his dog's collar, and the resolution. Perhaps something is needed here: does he get a new dog and put the collar on it? Does he find a picture, a ball, a toy of the dog's? Something is needed to connect these, and that's something we can do when we talk about refining the script, which is one of the last lectures in this section. If you want to play along: obviously I could tell it, and I'm going to do it right now, to make this a full-length script. Okay.
So I'm going to change the beginning to "write a script for", well, let's call it a 30-minute video. Let's not tell it about AI for now, and let's generate that. I've also actually said there's no dialogue but there can be a narrator's voiceover; that's quite a lot for a 30-minute film. Maybe I should stop that and run it with dialogue, and you'll see what it can do if you were making an actual script. Okay, so now it's generating this script without the time limitation. It says, okay, act one, done; act two, this is still giving me a narrator, because remember, from before, I can tell it that it can have dialogue now; act three, the boy spends several days searching everywhere over the fields. Yeah, obviously there's way more time. "The film shows the boy's face marked by quiet determination. Memories and letting go: as time passes, the boy visits the places..." Act three, resolution. Okay: at sunset he holds Buddy's collar, which has faded; the sun dips below; the camera shows the boy placing the collar around the oak tree. So, yes, there's a mound under an old oak tree; he takes a deep breath and looks out at the field, his expression one of acceptance and peace. So he's gone to where the dog was buried, obviously. And then, moving forward, the camera lingers on the hill and the old oak tree where the boy used to play; he also carries the memories of his best friend with him, like a gentle breeze that always whispers. So you can see that if you give it more time, this is ten minutes, there's a far more rounded story here, and it feels like one of those Pixar mini-movies you might get at the beginning of a Pixar film in the cinema. Two minutes is perhaps asking too much for a whole three-act movie structure, but it's possible, and we can develop that. So that was ChatGPT with scripting. We've jumped in there and got ourselves a script. What I like to do, either before or after, depends: if you like the script, you can make a structure from it, or you can have a structure and then make a script from it. I'm doing it this way around, but you could do the next lecture's approach before you do this; it's completely up to you. Next, we're going to talk about making video structures.
— Scripts vs. Structures: Using ChatGPT Effectively —
Continuing on now with scripting and ChatGPT, I want to let you know that a lot of you won't actually need a script, so don't obsess over the word "script" and what's needed. A lot of videos don't need one. You don't need it to say this person said that, the narrator said this, this scene looks like this; there are many examples of videos that won't need this. If I bring up the slide right here, I'll show you the difference between generating a script and a structure. A video script, as we showed you in the last lecture, is a detailed, fully fleshed-out document that includes dialogue, character actions, scenes, and descriptions. Okay, great; it's going to be needed if you're making a short film for a festival or something. But if you are doing, say, even a short two-minute AI comedy piece, or a documentary, or an informational or educational video, basically anything that isn't a fictional story, then you might want to align yourself with producing structures as opposed to scripts.
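To make the difference in granularity concrete: a structure is essentially a list of timed sections with bullet-point notes, rather than fully written scenes. A rough sketch, with placeholder section contents of my own invention:

```python
# A video structure as data: timed sections with bullet-point notes.
# Section names, timings, and notes below are illustrative placeholders.

structure = [
    {"section": "Introduction", "seconds": 45,
     "notes": ["hook the viewer", "archive-style visuals", "ominous score"]},
    {"section": "Key milestones", "seconds": 75,
     "notes": ["narrator lists breakthroughs", "timeline graphics"]},
]

total = sum(s["seconds"] for s in structure)
print(f"{len(structure)} sections, {total} seconds planned")
```

A script would then expand each of those bullet points into full narration and scene descriptions; the structure just guarantees the timings and key elements are accounted for first.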
Let me explain that. A structure provides an outline or framework of the video's key elements, including perhaps plot points, major scenes, transitions, et cetera: clear points, in a blueprint-like format, of what's needed to go from one stage to the next, but in point form, as opposed to the detailed, fleshed-out document that a script is. Also, if your project needs flexibility, then a structure is definitely better; a script is the final stage here. Many of you may do a structure first and, once you've generated it, then make a script from the structure. Sometimes it's the other way around, like we've shown: I've done the last lecture first and then this one, but quite often you might do the structure first, or you may only need one or the other. So let me show you. I'm going to paste in my prompt right here and go through it with you. Okay, so just like on the site, if you go through, you can see I've got a prompt here if you want to generate ideas, build structures, and so on, still sticking to the points we spoke about in the last lecture. Let me show you the prompt I've got that you can copy for getting a video structure. I tell it straight away, once again, first point, start with a clear objective: generate a script structure, not a script (I've said it both ways to make sure of this) for a video. It's a five-minute-long documentary.
30
Once again, I'm providing a clear objective, with context details coming up. The topic of the documentary is the growth of AI history. "Break the video down into five sections." If it were a nice 10-minute video, I might ask for two minutes per section; for a five-minute video, one minute per section is fine. "Add timings for each section." That's quite important when you're getting a structure, so you know how long you've got for each piece, and so when you're making creative decisions like shots, narration, or voiceover, you can make sure they match. "In each section, describe what a narrator will be saying, interviews needed, images needed, visuals, music, et cetera." So each section should have all those details, everything I need to create an AI video from it. Then, going back to the point about requesting specifics: any key points or outlines. Before I talk about the tone, I ask it to format this simply, in sections and bullet points. Then finally: "The tone of this documentary is informative but emotional, worrying about the future implications of AI. Complete the documentary with a strong conclusion and action point for the viewer." You can copy this for your own video, using these same points; you can see what they are. I'm going to generate that, and then we'll see what it brings out and how a structure differentiates itself from a script.
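The prompt above follows a repeatable pattern: clear objective, topic context, section count with timings, per-section specifics, simple formatting, and tone. As a rough sketch, you could assemble it programmatically. The function and parameter names here are my own, purely illustrative, not part of ChatGPT or any tool shown in the course:

```python
def build_structure_prompt(topic, total_minutes, num_sections, tone):
    """Assemble a video-structure prompt. Hypothetical helper, illustrative only."""
    # Even split across sections, e.g. 5 minutes / 5 sections = 60 seconds each.
    seconds_each = total_minutes * 60 // num_sections
    return (
        f"Generate a script structure, not a script, for a video about {topic}. "
        f"It's a {total_minutes}-minute-long documentary. "
        f"Break the video down into {num_sections} sections (roughly {seconds_each} seconds each) "
        f"and add timings for each section. "
        f"In each section, describe what a narrator will be saying, interviews needed, "
        f"images, visuals, music, etc. "
        f"Format this simply, in sections and bullet points. "
        f"The tone is {tone}. Complete with a strong conclusion and an action point for the viewer."
    )

prompt = build_structure_prompt("the growth of AI history", 5, 5, "informative but emotional")
```

You would then paste the assembled string into ChatGPT; the point is only that every slot in the pattern gets filled deliberately rather than ad hoc.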
Here's what came back. Section one: Introduction, "The Dawn of AI". Purpose: introduce the topic of AI, setting the stage with a brief history and an emotional hook. That's great. The narrator opens with: "In a world increasingly driven by intelligent machines, the seeds of artificial intelligence were sown long ago." Visuals: archive footage (we could generate that with AI) of computers in the 1950s, black-and-white photos of early AI pioneers like Alan Turing and John McCarthy, glitches and digital overlays subtly hinting at uncertainty. Music: a hopeful yet slightly ominous orchestral score, gradually building. We could definitely ask Suno for that; you saw me do that in the workflows section, so do check that out. Tone: intriguing, with a mixed sense of nostalgia and caution. So this has given me the structure. What I could do, and I'll show you in a moment, is copy that and say: okay, now write me a script for this section.

Section two: "Rapid Advancement Takes Shape", for the next one minute and 15 seconds (that first section covered the first 45 seconds). Purpose: outline the key milestones. This is where we'll differ slightly: you're not necessarily going to use AI for all of this, but we could use other visuals, graphs and animations. For "the exponential growth of computing power and data", I could have a generic graph generated with AI, that kind of imagery, not a specific one showing dates and figures; that would be something you'd create yourself if you were familiar with something like After Effects. Clips of pivotal moments, Deep Blue defeating… okay, from this structure I could use AI to research exactly what those moments are. For "visuals between decades showing advancements", I could easily find historical pieces and matching imagery to go along with this. "Transition from hopeful to more intense music": no problem. "Interviews: soundbites from AI researchers": you could grab those from YouTube, et cetera, or create them yourself. It's very unlikely that an informational video like this would use AI alone; you're probably going to download interview clips, perhaps from YouTube, or stock footage pieces you've found, and use AI to flesh that out.
Then the next section, "AI in Our Daily Lives: Now It's Everywhere", again one minute and 15 seconds: images of applications and how people use them. Section four, "Implications of a World Transformed": the narrator raises the question, "But in our pursuit of progress, what are the consequences? What does it mean when machines make decisions for us?", and it gets a bit more ominous here. The conclusion is section five, the last 45 seconds: wrap this up with a call to awareness and responsibility, encouraging viewers to think critically about the future of AI.

So it doesn't matter that the topic I'm showing you happens to be AI; what matters is that you see what a structure looks like. I use structures even when I'm making, say, a comedy video or a trailer. I'll say, "structure me out a trailer for a new Pulp Fiction 2 movie that doesn't exist", and it will bullet-point it like this. Then, if I'm happy with the structure, I'll flesh a script out from it. Getting your structure down first and then fleshing it out with a script is the way to go, because it's much more difficult to move the structure of a script around if you don't have a structure to start with: it's already somewhat fleshed out.

So I'll take this section I quite like and say: okay, create a script around this. I don't need to tell it about the tone and all those things again; we've already told ChatGPT what the tone is. It knows all that information, and it carries forward what we've already told it before.
So: "an intriguing tale of nostalgia with a hint of caution". The screen is black. We hear the soft hum of an old machine starting up, followed by the click-clack of typewriter keys. An old CRT monitor flickers to life. We could easily generate that, using ElevenLabs for the sound effects and any of the image or video generators to get an old monitor flickering like this. "In a world increasingly driven by intelligent machines, the seeds of artificial intelligence were sown long ago." Cut to gritty black-and-white footage. Cut to photos of Alan Turing, et cetera. Narrator: "It was a time of boundless curiosity and ambition, when visionaries like Alan Turing and John McCarthy dared to dream of thinking machines." And you see, it's breaking it down now: it's actually giving me what the narrator says alongside the scenes that go with it, including music, even details like a slow fade to black, the flickering monitor, the music slowly building. End of section one. So it has really fleshed that out from the structure.

But if I had just generated a script straight away, with all these parts, it would be hard for me to then say: actually, I want my second point to be less specific, more general, et cetera. If I have a structure first, I can change that afterwards. So do think about either creating a structure first and turning it into a script, or perhaps you only need a structure and no script, depending on the video. Perhaps you don't need any of this and you're just going to wing it and start creating; absolutely fine, whichever works best for you. I just want to show you that people sometimes forget about making video structures as opposed to actual scripts, and they are very useful. I personally work far better with bullet points like this than with a whole lump of script text, but it depends on how you work best.

So that was script versus structure, and I wanted to show it to you. Next, I'm going to show you how you can generate everything you're generating here in ChatGPT with a set tone, by setting yourself up a kind of AI companion in the voice or tone of a theme, a narration style, or a person. I'll explain more; it's really good, like having your own professional AI assistant to help you generate these. I'll show you that in the next lecture.
— Custom GPTs: Setting the Tone for Your Scriptwriting —
1
So this is something a little bit fun with ChatGPT that I want to show you: you can actually create your own GPT, your own assistant, if you like, for creating your scripts. This won't take too long and I won't go into too much depth; we'll just do it together.
In ChatGPT, go over to Explore GPTs, and you can see the top picks right here: things like a code tutor, someone to make your resume, and under writing, "Write For Me" and "Humanize AI". These are all GPTs that have been created as aids for when you're prompting and trying to create whatever you're making with ChatGPT.

Now, the model is so vast, and just as with humans, every single question can mean something very different to every single person. In the same way, your ChatGPT experience would be very different from mine if we each input something, based on our history or the way we phrase things slightly. There are so many different ways to interpret everything. So a GPT can help you create a voice, if you like, someone you can use. I keep saying "someone" as if it's a real person; that's what it feels like with AI, doesn't it? Somebody, something, to aid you along with creating.
So what I'm going to do is create a GPT up here. I'll follow this along and show you exactly what we do with it and how, and then you can do this too. It says: "I'll help you build a new GPT. You can say something like: make a creative who helps me generate visuals, or make a software engineer who helps me format my code. What would you like to make?" I type: "I would like you to be Quentin Tarantino, the film writer, to aid me with writing scripts in his style, always."

So I'm asking it to be Quentin Tarantino, a very famous scriptwriter and director with a very unique style. Now if I ask this GPT to produce scripts or structures like I've been doing, it's essentially going to give me the result as if it were Quentin Tarantino. If you like that kind of writing style, great; and this could be any director, Wes Anderson, or a historical figure like Alfred Hitchcock, anyone whose style you want to make sure you're getting.

So now it's creating a name for it, and let's create a picture. Yes, here's a profile picture. Great, yes, thank you, very polite of me. "You're welcome! Quentin Scribe is all set up to help you write those Quentin Tarantino-style scripts. If you have any more adjustments or want to try it out, just let me know. What's your first scene idea?"
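Under the hood, the persona the GPT builder creates amounts to a standing instruction that gets applied to every request. As a rough illustration of the idea (the function and field names here are my own, not anything ChatGPT exposes), you could compose such an instruction like this:

```python
def build_persona_instruction(name, role, traits):
    """Compose a standing 'persona' instruction, the kind of thing a custom GPT
    applies to every request. Illustrative sketch only; names are hypothetical."""
    trait_list = ", ".join(traits)
    return (
        f"You are {name}, {role}. "
        f"Always respond in their signature style: {trait_list}. "
        f"Apply this style to every script or structure you write."
    )

instruction = build_persona_instruction(
    "Quentin Tarantino",
    "the film writer and director",
    ["sharp dialogue", "dark humor", "nonlinear storytelling"],
)
```

The GPT builder does this composition for you from your conversational description; writing it out just makes clear what "always in his style" means mechanically.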
Okay, let's see if this works. I'm going to use something similar to what we did before. I won't use the full prompts from the previous sections, for the sake of not making this too long for you.
"Write me a two-minute scene set in Japan during World War 2, between two lost soldiers, one American and one Japanese." Actually, let's make that a five-minute scene, so there's enough to see whether the dialogue and the scene come off like Quentin Tarantino. Let's hit enter. Now, I've intentionally given it hardly any instruction: I haven't said "do it in this style", "the theme is this", "it should be set like this", "these are the characters". I want to see if it starts doing Tarantino things on its own. If you know any of his movies, Pulp Fiction, Kill Bill, anything like that, you'll know it's going to have quick-witted dialogue and probably a bit of blood and gore. Let's see what it does.

Here we go. "Interior: bamboo forest, day. Thick mist rolls through an isolated bamboo grove. Birds in the distance; gunfire echoes softly, almost drowned out by the buzzing of cicadas. The greenery is endless and uniform, creating a disorienting maze. Enter JACKSON, late 20s, American, face smeared with dirt, uniform torn. He breathes heavily, disoriented, rifle ready, his eyes darting left to right. He's lost. A rustling sound to his left. Jackson spins, pointing his rifle towards the noise. Out steps HIRO, early 30s, Japanese, equally ragged, holding a bolt-action rifle. They both freeze, eyes locked." Cool scene. They speak in their respective languages. Jackson: "Stay there! Who are you?" Then Hiro speaks in Japanese. "They both recognize the fear in each other's eyes, the universal language of desperation." Jackson gestures with his rifle for Hiro to drop his weapon. Hiro hesitates. "Put it down, or I will." Even if he doesn't understand the words, he gets the intent, and slowly lowers his rifle to the ground. Hiro cocks his head: "What are you saying?" He gestures towards him: "You, me, no fight." Jackson swallows. Okay, let me scroll through this; I won't read you the whole script. "No fight." He pulls a crumpled cigarette from his pocket, offering it to Jackson. Yeah, this scene got cool. He asks if he's American. Jackson, not understanding: "You know the way out?" Hiro tilts his head, trying to comprehend; he mimes someone lost, walking in circles, then chuckles and nods: "South." And they part ways.

Okay, it's quite a cool scene, but it hasn't really given me the quick-wit kind of thing that Tarantino is known for.
So we can train it a little more. If I go over to Configure right here, let me check this out. The description reads: "Tarantino with sharp dialogue, dark humor, nonlinear storytelling; embody Quentin Tarantino's signature style." This is all good. There are conversation starters like "I need help writing this conversation" and "I like Tarantino's approach of a plot twist". And here you can actually upload files: if I downloaded the script of Pulp Fiction, I could upload it and start training the model on it, and that would definitely help.

I mean, it is a cool scene as set up: they bump into each other, have this conversation, smoke a cigarette, fade to black. It's a bit more like the modern Tarantino stuff, Once Upon a Time in Hollywood. But let's run it again: "Run this scene again, but with a Tarantino plot twist at the end, some blood and gore, and sharp, witty dialogue." So I'm giving it a bit more instruction there, and it's going to work with that. Let's see what it says.

Okay, now it feels slightly more Tarantino, in my opinion. Through gritted teeth Jackson says: "Just my luck." "Stay right there", in Japanese; he gets it. "Okay, look, I don't want to paint the forest with your brains. Put it down." Now we're getting more Tarantino. Then they decide: no fight, no fight. Cautiously they lower their guns and share a cigarette. So they've decided there's going to be peace; they're going to walk away. But as they do, Jackson notices Hiro's bootprint, a peculiar, neat tread; his mind races. Suddenly Hiro draws a concealed knife, slashing at Jackson. Jackson barely dodges; the blade grazes his arm and he stumbles backwards. Oh, so we're doing this now. We've got some cool dialogue here. Hiro lunges, aiming for Jackson's gut. Jackson, mocking, through clenched teeth: "I thought we were having a moment." Yeah, this is really good. Hiro glares up, spitting blood, very Tarantino, and sweeps Jackson's legs out from underneath him. Jackson swings his rifle like a baseball bat; wham, Hiro's head slams to the side, blood spattering the bamboo. Jackson breathes heavily, adrenaline still coursing through him. Hiro, his face bloodied and broken, chuckles weakly, muttering in Japanese: "Americans. Always a surprise." Jackson crouches over him, eyes burning: "Yeah, well, I'm full of surprises." He cocks the rifle, shoving the barrel against Hiro's forehead. Hiro closes his eyes, accepting it. Jackson, in a sarcastic whisper: "Peace." Bang. Blood sprays everywhere. "Goddamn. No happy endings in the jungle."
Now we're getting more Tarantino. Let's keep testing: I'm training it the more I use it, so let me run exactly the same first prompt again and see if it has started learning from my last input. This time, reading through it (I don't need to read it all to you), they basically meet up, walk through the jungle together, and another soldier shoots; it ends up very bloody: "the bullet catches the soldier in the chest; he staggers back, blood spattering all over the bamboo, and collapses to the ground." So it seems to have learnt: the more I run this, the more it understands.

So this is how you can keep using the same GPT if you're going to be writing a lot of scripts in a similar style, and configure it further by uploading files. If I want, the script of Pulp Fiction is available online (I could probably even ask ChatGPT to give it to me), so I could download it, save it as "Pulp Fiction", and upload the file. Okay, it's uploaded here and loaded, and now it's ready to create with slightly more understanding of what we could be making. I can also reference that script directly, and it now has a full understanding of it. Training, training, training.

So that was just another little tool; I don't see people talk about creating a GPT like this too much. Most of you watching will not keep making scripts in the same voice, you'll have lots of different projects, but if you do want to, you can start training your own GPTs to do this. Next, let's talk about refining a script, the last stage in ChatGPT, before I briefly move on to some other tools, so that you have a full arsenal of AI tools at your disposal before I create the script for our course project.
— ChatGPT for Script Enhancement and Editing —
1
This is the last part of the scripting process: refining your script, making sure it's the best it can be. So many times I see creators generate a script with AI, decide it sounds pretty good, and just go with it. A script in the industry would have gone through dozens and dozens of refinements, into the hundreds. You can even have people working on a script while production is happening; that happens quite often, and there are continuity people making sure things stay consistent throughout the script, often during filming too. So do not just generate your script and call it done. Refine it with some of the tools I'm about to show you.
Now, there are six key areas for script refinement, and I'm going to show you all of them with the ideal prompt template for each: plot development, character building, script analysis, writing dialogue, script feedback and revisions, and checking sensitive content. For each one I have a slide with a good prompt (there's no such thing as "perfect") that you can use for any of your scripts, and I'll show you what each one generates when we run it in ChatGPT.

Now, you may not need all of this; what I'm giving you is extensive. If you're doing a 10-minute film, a short film, a huge project, or a feature-length film, then you'll want to refine like this. If you're doing shorts that are two minutes long, you'll need some refinement, but not as much. So take away the bits you need with your projects in mind, and ignore the bits you don't. But I'll give you everything, so that at least you have it.
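To keep the six areas handy, you could stash each template as a string with placeholders and fill them in per project. This is a minimal sketch of that idea: the area names follow the list above, but the wording and helper are my own illustration, condensed from the fuller prompts shown in this lecture:

```python
# Minimal sketch: one reusable template per refinement area.
# Area names follow the six listed above; wording is illustrative.
REFINEMENT_TEMPLATES = {
    "plot_development": (
        "Create original and compelling plotlines for my script, "
        "focusing on {theme} in the {genre} genre, for a {audience} audience."
    ),
    "character_building": "For the script above, develop each character's background, personality, motivations, and arc.",
    "script_analysis": "Evaluate the structure, engagement, and pacing of the script above, and recommend improvements.",
    "dialogue": "Write a powerful monologue for {character}, who is experiencing {emotion}.",
    "feedback_revisions": "Give step-by-step, detailed constructive feedback on the script above.",
    "sensitive_content": "Check the script above for sensitive content and flag anything problematic.",
}

def fill_template(area, **details):
    """Fill one area's template with project-specific details."""
    return REFINEMENT_TEMPLATES[area].format(**details)

prompt = fill_template("plot_development", theme="wonder", genre="sci-fi", audience="young Gen Z")
```

Each filled template is then pasted into the same ChatGPT conversation that already holds your script, so the model applies it in context.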
Now, the first prompt I want to talk about is the template for plot development. I've got it broken down in the slide here, and remember when we were talking about what's needed in the points for a good ChatGPT prompt: I have them all inside this prompt, which you can copy for your own use.

Point one, the objective, very clear: "Create original and compelling plotlines for my script, focusing on [insert your theme] and [describe the genre]." Two, character arcs and unique concepts: "Expand upon character arcs and introduce unique story concepts that align with the script's theme and genre. Ensure these ideas resonate with the target audience", and then state what your target audience is. Number three, story elements: "Include gripping elements such as conflict, resolution, and character growth to create a well-rounded narrative." Then, flexibility and detail: "Provide enough detail to establish a clear narrative direction, but allow room for adaptation and further development" (we have another point for that). And then commercial and critical appeal: "Aim to suggest narratives that are not only engaging but have potential for popular and commercial success and critical acclaim." That could be anything from trending topics to newsworthy, consumer-wanted subjects that the AI can weigh.
Now, back in ChatGPT, I'm using an idea we spoke about a few lectures ago called The Blue Line. It's about a young girl who lives in a futuristic city with a controlled, automated transport system. She gets off at a decommissioned train stop and finds a rebel group living there. I've asked ChatGPT to generate a 10-minute script for it. If you already have your script, you can paste it in; here, I've had ChatGPT generate it, so it has my script right in the conversation, and I can run these prompts against it directly.
So I paste in the prompt I've just shown you. You can add "with regards to the script above" to be safe, but you don't need to. One, objective: "Create original and compelling plotlines for my script, focusing on…", and then the theme, let's say the theme is wonder, and the genre, which I'll say is sci-fi. The character arcs to develop, yes. "Ensure the ideas resonate with my audience": let's say these are young Gen Z. The story elements, including gripping elements such as conflict, then flexibility, then commercial appeal. Okay, and let's run that.
Now, it doesn't matter for this example what the actual content of my script is, because yours will be different from mine; I just want to show you how using this can help. When you run that prompt, you get compelling plotlines focused on wonder. You can see here: mysterious glitches, introducing the concept of "digital shadows", that's a nice one. Time and memory: Ray experiences subtle distortions in time and memory, such as missing moments and gaps in her daily routine, and the disparity between the controlled, surface-level utopia and the underlying reality.

Character arcs: Ray's journey, with her internal conflict right here; she grapples with the fear of deviating from the norm and losing her identity in the world, plus new goals. Nice. Juno, her mentor figure, gets her own arc. Unique concepts: a rebel philosophy, the older rebels believe in a concept of "three paths" (they've named this); the blue lines connect to a hidden network called Echo, consisting of old, forgotten systems, and it named that too. Great, this is really bringing in depth: we've got names for things like "three paths" and "Echo", and small, mysterious objects known as "glitch artifacts" that the rebels collect. Great.

Then the gripping story elements: Ray's internal conflict; the external conflict; the intergenerational conflict between her, young, and the people she meets; the resolution, the emotional resolution Ray finds for herself, the hope she gains to resolve her internal and external conflicts; and character growth. Flexibility and detail: the script offers a clear narrative direction with flexibility for further development; there could be more development between Ray and the rebels. And it has commercial and critical appeal: themes relevant to Gen Z, as the narrative explores control, conformity, and self-identity, all relevant to that generation; strong visual elements, a cyberpunk aesthetic with glitch effects; a futuristic landscape that's visually engaging and commercially viable, appealing to fans of sci-fi films like Blade Runner and The Matrix; and here are its notes on critical appeal.

And here it has actually produced my refined script outline, with the timings, where you can start putting in some of these changes; it's even done that for me. This is really good, a really great prompt to use for refining and developing your script.
Sometimes when we get our first script it's a little bit surface-level, and adding these small things makes the difference. I ran a script the other month where I asked it to develop a character a bit more around some themes, and it turned out we needed an object for the character that connoted vulnerability; he ended up having an asthma pump, and there's a metaphor behind that asthma pump. It's that kind of depth you get in your scripting when you re-prompt and re-prompt like this. So that was refining the plotline.
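That re-prompt-and-re-prompt habit is just iteration: each pass takes the current draft plus one focused note and returns a revised draft. A toy sketch of the shape of the loop (the real "refine" step is a ChatGPT prompt, not this stand-in function):

```python
def refine(script, notes):
    """Toy stand-in for one refinement pass: tag the draft with a revision note.
    In the real workflow each pass is a ChatGPT prompt against the draft;
    this only illustrates the iterative shape."""
    return script + f"\n[revised: {notes}]"

draft = "THE BLUE LINE - draft script"
# One focused pass per refinement area, in sequence.
for pass_notes in ["plot development", "character building", "dialogue polish"]:
    draft = refine(draft, pass_notes)
```

The point of the shape: never try to fix everything in one mega-prompt; run one focused pass per area and carry the improved draft forward each time.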
Let's move on to character building. I paste in that prompt and just say "for this script that's above" (you could put in your script's name if you have multiple), and it's going to tell me all about developing the characters. So let's see what this prompt produces.

Great, that's completed. It lists out the main characters, which is really great background information to have for your scripts: you can check whether there's continuity throughout, whether a character is acting as they should, and things like that. So we've got our main character, Ray, with everything about her background, personality, motivations, and her character arc, how, as we spoke about, she starts one way, has this kind of equilibrium, disequilibrium, distrust, then back to a state of equilibrium. And then we've got all the other characters right here: their age, motivations, personality, a full list. It covers each character's arc; for example, Ray's growth involves learning to take responsibility for change and trusting her instincts. She evolves from a passive observer to an active rebel leader, symbolizing hope for the resistance. And it does the same for all the characters.

Under "contribution", we see what each character contributes and why they're there. If a character isn't contributing very much, remove them; they're not needed. Don't pad the script out with unnecessary fluff. Then it talks about depth and consistency. Really good prompt; I really love that one, just so you have the background knowledge. In the real world you'd definitely do this to help your actors, but here you are being every single actor, so you need to know the background and depth of every character inside these stories, because you are creating them.
Now, of course, you want to develop your script further. The next one is the ideal prompt template for script analysis. You want to get analysis of your script: it's like meeting a professional script analyst and saying, "hey, what's wrong with my script, and what's right?" Let me paste that in and talk through it.

What this does is evaluate the structure. Yes, it's in a three-act structure; that's clear, that's good. There's engagement and suspense, which is needed for viewer retention, to make sure they stay engaged and watching, and it flags identity issues. Then the recommendations for improvement, my favorite part: it says to reorder and tighten the introduction, giving examples; focus on visual storytelling; shift the rebel dialogue to shown interaction; and smooth the transitions, telling me to make sure there are transitions between scenes and what's needed, with feedback and rationale. On the opening, for instance: start with Ray on the train amidst a bustling crowd before showing the small but significant glitch; that introduces mystery and sets up her curiosity quickly. Great, it's giving me actual action points I can use when I'm creating these visuals.

It's these small details and nuances that a very advanced filmmaker would know and understand. If you're more of an amateur, or early in your career, it's these small things you miss that really tighten and tell a story. Things like: we need to show that something is wrong, that she's curious, intrigued by the world around her or somewhat put out by it, and there are small actions that really help cement that as that character's character. Then it goes into an enhanced scene breakdown for The Blue Line, with my timings right there and advice for each. This is really good, probably the most important prompt, if not my favorite alongside the first one we used for refining scripts. Please copy and use it for yours.
Now, the next one is dialogue. You might not be using dialogue, given the AI limitations, but you may be, or you may be using a narrator. I'm just going to paste this in and see what it says. Objective: "Write a powerful monologue for…", let's do this for Ray, who is experiencing, I'm saying, insecurity. I'll remove the parts I don't need, and let's see what it develops for the dialogue sections.

So here we are: ChatGPT has given me dialogue. For the introduction, which we said needed some development in the earlier section, it has given me this: "I never thought I was meant for something more. It's always been easy to just follow the lines", and she's on a train, on lines. Great. "Everyone else seems so sure about their place, like they were just born with manuals for their lives. But me? I've always felt like I'm waiting. Waiting for something to make sense. For a sign that I'm not broken." Rather than having her say this aloud, it can obviously be internal; you can use a voiceover. And in the scene it suggests, while she's on the train with the glitches we just spoke about, you could have this as narration, her internal voice, as she looks out from the train. That's nice. There are loads more examples you could use for dialogue throughout this, and it talks about tone and everything else. So if you do have dialogue, or you want narration, keep working and reworking it, because the smallest nuances in the words that are said, or the way they're said, can really make a difference in moving your story forward.
So the next part, which is much like the one we just did before: I want to get feedback, step by step, for this script, and produce detailed, constructive feedback. So if that second or third one we did wasn't enough, then this prompt is going to give it to you. And if you like this structure (I like bullet-point style; list structures really help), then you can get feedback from a great script advisor, which is ChatGPT playing that role.

So there's script structure; it's telling me the strengths right here with recommendations; dialogue strengths and recommendations; character development strengths and then recommendations. The pacing, once again (I love that this gives recommendations first): create an early incident that hints at Ray's curiosity. Great. Expand the resolution slightly to show Ray taking her first active steps in leading the movement. Nice. Overall strengths, recommendations, identified issues and recommendations, smooth transitions, specific feedback and rationale, and a conclusion. Just some places to tighten the pacing, refine the dialogue, deepen the characters. Great. This is a really good prompt.

Also, please copy and paste this on any of your projects, probably this one, no matter how big or how small, because when you're developing a project and you've been doing it for hours, days, even weeks, you sometimes become creatively blind to your project. And it's great to have, in this case, an outside voice reading it over and giving you recommendations, things you haven't seen. And don't be stubborn and say: no, no, my script's fine. Do take on board, not everything that's said, but at least some.
And the last point is a big one right now. Now, you don't need to listen to any of these; this is your project, and you have creative freedom, obviously. But depending on where you are placing your videos, online, at a festival, or wherever, you may want to start asking: are there any sensitive issues I haven't realized? And some of you creating this in certain cultures around the world will have a different view of how other cultures will take it. And while ChatGPT has a slightly westernized, American skew on things, this is really good; I can just ask it about the script. And it can start telling you: are there any issues I haven't thought about here? Like, do I represent a certain identity of person in a certain way?

So we can have a look here: stereotypes and problems. The core themes of The Blue Line center around autonomy, control, and rebellion against the system. While the narrative focuses on these broader themes, the portrayal of characters, social structures, and movements should be carefully examined. Okay, feedback on the issues: portrayal of rebels. Make sure we don't imply that anyone outside of the norm is peculiar or strange. Ray is a young leader: gender and age. We must develop the relationship between Ray and Juno, who's slightly older than her, while Ray is an underage teenager. Make sure it's done thoughtfully to avoid reinforcing gender or generational stereotypes, such as the older woman as infallible mother figure, or the young girl who needs saving. Handling of cultural and socioeconomic dynamics; suggestions for improvement: diversify the rebel group, and some other points right here.

You might just read this over. You might not want to make any changes, but just make sure you've covered all bases and there's nothing you've missed. You haven't suddenly gone: oh no, I've realized I've made every negative character a certain race, gender, or identity, and I didn't even realize it, and it could perhaps be taken by some as negative; or, I could have done more here to be inclusive, etc. Just make sure. Especially with my video that we've been developing for this: it obviously deals partly with Japanese culture and partly with American culture in wartime, so I'm definitely going to have some sensitivity notes come back with regards to that, which I need to make sure I'm aware of.

So those were the six main points with the different prompts. Please go ahead and use those prompts when you are refining your script.
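If you end up rerunning these review prompts on every draft, you can script the loop rather than paste them by hand. Here is a minimal sketch assuming you have the `openai` Python package and an API key; the prompt wording, the `build_review_request` helper, and the model name are illustrative placeholders, not the course's exact prompts:

```python
# Reusable script-review prompts, in the spirit of the ones covered above.
# The wording here is illustrative, not the course's exact prompt text.
REVIEW_PROMPTS = {
    "feedback": (
        "Act as an experienced script advisor. Give detailed, constructive, "
        "bullet-point feedback on structure, dialogue, character development, "
        "and pacing, with strengths and recommendations for each."
    ),
    "sensitivity": (
        "Review this script for cultural or social sensitivities: stereotypes, "
        "representation of identities, and anything audiences in other "
        "cultures might read differently. List issues and suggested fixes."
    ),
}

def build_review_request(script_text: str, kind: str) -> list:
    """Assemble the chat messages for one review pass."""
    return [
        {"role": "system", "content": REVIEW_PROMPTS[kind]},
        {"role": "user", "content": script_text},
    ]

# Sending the request (requires `pip install openai` and an OPENAI_API_KEY):
#
#   from openai import OpenAI
#   client = OpenAI()
#   messages = build_review_request(my_script, "sensitivity")
#   reply = client.chat.completions.create(model="gpt-4o", messages=messages)
#   print(reply.choices[0].message.content)
```

The same two prompt keys cover the "outside voice" feedback pass and the sensitivity pass described above; you could add entries for dialogue or character prompts in the same way.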
Now we're going to go on, and I'm going to show you some other tools outside of ChatGPT. I'll give you a quick overview of four more tools we can look at, and then you can decide whether you want to use ChatGPT like I have, or any of these others. We're going to look at Squibler and Writesonic, and there's also TextCortex, and Gemini too. So let's go over now, and I'll show you some of those tools.
— Squibler: Crafting AI Video Scripts with Ease —
First, in the next few lectures, I want to show you some new tools that I've been playing with. These were suggested to me by a friend of mine who writes scripts using AI; he said there are actually some specific AI tools now meant especially for creating scripts. So first, we're going to talk about Squibler. Now, Squibler and the tools coming up in the next two lectures are really fun tools to use. I'm going to go through this as if you're a new person who's never used Squibler before, so this is what you'll see; just five minutes or so for each of the next few lectures, showing you these tools. Perhaps you'll want to use them; perhaps, like me, you'll use them initially and then take what they develop into ChatGPT. These are ideal for people who haven't written scripts before, who don't know the fundamentals or what they should include. It really helps with that.

So let's go over to Squibler.io, start this for free, and take a little look at this tool step by step. Now, it really guides you: what do I want to create? Okay, let's just go through these. I want to create a script. Now, do you want to upload it yourself? No, I want to generate this with AI. Title: optional. What's your script going to be about? Now, if you haven't generated ideas, you need to go ahead and do that. I'm just going to put in something loosely based on the project I'll be doing at the end of this: two children during World War Two, one in the USA at Pearl Harbor, and the other in Japan during the Hiroshima bombing; the script is from the perspective of the children and the loss of their fathers. Okay, create an outline.

So the first thing Squibler does is give us an outline. It's got Act 1, Act 2, Act 3 already, and it's given me "life before the storm", the setup if you like, then the inciting incidents (the Pearl Harbor attack, the Hiroshima bombing), then finding hope, letters of hope, and remembering the past. From this, if you're happy, you can say "create project from this outline". And if you go into the elements right here, it's got the characters: a young boy living in Pearl Harbor whose father is a naval officer, Tom's mother, and then a young girl living in Hiroshima whose father is a carpenter, and her mother too. The settings: Pearl Harbor and Hiroshima. The events: the attack on Pearl Harbor and the atomic bomb. The objects: the letters, photographs, memories. It divides all of this into elements, which is really, really good for scriptwriting; if you don't have any scriptwriting experience whatsoever, this is perfect for you.

Okay, let's create the project from this outline. Now, this is where you'll see that this is not a free tool; there's billing involved right here. I can pay $29 monthly, or $16 a month billed yearly. You do have a free three-day trial, though, which I believe can be canceled anytime, so you could try this and see if you like it. If it's a tool you're going to use a lot, then it's great, because it does a lot of that work for you. I can also click here to continue with the limited plan, and then it's going to prepare my project for me.

And right here, in a text document, I have lots about my project. The script opens with two scenes, Pearl Harbor and Hiroshima; Tom, a lively young boy; and here are the scenes, scene by scene. And, oh yeah, it shifts to a few years later: Tom is all grown up, and they've both found solace and strength in their memories. So it's really divided this up for me: these are my scenes here; here's Act 2 if I click on that, the scene shifts to this, and Act 3. It's broken this all down, which is really, really good. This is on auto-compile, or I could click "guided", where I can add prompts if I want to elaborate on any of this, choose which elements should be included, etc. But this is really good as a starting point. I could select it all and copy it, and then, if you don't want to pay for anything, I could go back into ChatGPT and say: generate me an actual script or structure, like we've been doing, using this. And that's a great way it's developed a story for me, step by step, through Squibler. So that's a great tool I just wanted to show you. Let's move on, and let me show you the next tool for AI script writing.
— ChatSonic: AI-Powered Storytelling for Video Scripts —
The next tool to aid with script writing, specifically AI script writing, is Writesonic. I believe the same company makes ChatSonic and SocialSonic; "Sonic" is their theme. I'm going to show you this one. Much like Squibler, the tool we looked at in the last lecture, it's built to aid with writing, though not just script writing, which Squibler is far more aligned with; this is for all writing: general writing, emails, etc.

As if you were coming on here for the first time, I'm going to show you this. You'll have credits, and I'll explain that in a moment. It asks: okay, what will we create today? Let's go into "view all", because scripts are not on here. Now I can scroll through and look at all of these, one at a time, if I wanted to: everything from a follow-up email, subject lines, LinkedIn ads, email replies, blog posts, article writers, a humanizer to make text more human, etc. Let's go into "write" and type in the word "script".

Now, what it comes up with is not a film script, but I really like this one: YouTube video script. It's really, really good. It's good because we are all a very social-media-savvy, content-creation-savvy population now. We've seen a shift: some TV shows and movies created in the past would not be successful now, because as an audience we consume so much social media content that we need an instant hook, a reason to watch and keep watching, and constant engagement. We are the generation of TikTok and few-second Reels, with a very short attention span. So actually creating a script for YouTube, with the specifics it has here, is really good, because it's going to generate a great script.
Now, you can do things like add yourself a brand voice if you were doing this for branding, etc., and language generation, but we don't need that. So: video topic. For this case, I'm going to do a documentary-style video that you might be doing; actually, let's call it a tutorial, which works. I'm going to say "documentary". Next, who is the intended audience? I'm making a documentary for the general public. Next, what specific subject is it? It is AI; we did this in the other one: AI, history and future. Next, what's the style approach? Do you want a step-by-step guide for this, or narrative-driven, expert interviews, an animated explainer? I would like narrative-driven for this. Generate.

So it has a script for me here, and it's divided out in a way I can really understand. You remember before, I asked ChatGPT to make a documentary on a similar topic in five sections, etc. Well, you don't even need to do that here; it understands that for a YouTube video, or any video, we need to have sections. It's got an opening hook. Narrator: "Imagine a world where machines think, learn, create. A world where artificial minds work alongside human intellect. This isn't science fiction; this is the world we're living in right now. But how did we get here, and where are we headed?" Nice. So it's got a hook instantly. Then it's got your introduction section, the historical part (the birth of AI), AI winter and renaissance, AI today, the future of AI, ethical considerations, and then a conclusion. So this is really good.
It's especially good, I think, as my initial generation. I might actually take this, copy it, and then go back into ChatGPT and say: this is great, but I want a different tone. For example, it's talking about the future of AI, literally what people are doing, but it doesn't have an ominous tone; I might want some drama in this. This tool is great for getting to that first point: if you're not a script writer, get to this first stage with this, and then perhaps use another tool like ChatGPT to take it, paste it in, and start adding things like the tone you want. Because you haven't got that freedom here; you saw earlier, when I was showing you on the site, the different things you need to mention with ChatGPT, i.e. the tone, all these points right here. You haven't got the freedom to write that in Writesonic; it's helping you by directing you, by giving answers for you. But it's a great place to get your first draft through.

So if this appeals to you, go ahead and check out Writesonic at app.writesonic.com, and you can see how you can generate your own ideas. I just think it's especially good if you want to create social media content. You can see, actually, they've got scripting here for all kinds of things: if I search "script", you can have a TikTok/Instagram Reel script. So if you're making social media content (and I'll talk about that in a bit), this is a really good tool for doing that. But it's also good because it's social-media savvy, understanding the opening hook and what's needed for a video. So that was Writesonic. Let's go on to the next tool that's really specific to script writing with AI.
— TextCortex: Writing Precise AI Video Scripts —
Now, the next tool I want to show you, just so you're aware of it, is TextCortex; then go away after this and decide which ones you like the most. And once again, of course, if you go back to the site, you can see how to use each individual one of these specifically; I've broken them down with example prompts and what each one is specialized in and used for.

Now, whereas the previous two I've shown you are great at asking you questions to direct your responses and generate a script (for real newcomers, they're great for that), this one is slightly more like ChatGPT: you can pretty much ask it what you want. I come in here and say: write me a 15-minute script for this. This was our project from a few lectures ago, The Blue Line, about the girl Ray and the underground. Okay, it's generating this very fast. It's an underground society in an AI-driven culture where everyone is given a schedule to travel, and she finds a rebel group in a decommissioned station that she gets off at. So it pretty much does exactly what ChatGPT would do. It's broken it down; it's got characters on here, Lena, Ray, okay, and then it's got a lot of dialogue in here.

Much like you'll see when we generate images, we use multiple different tools: no two generations are the same, no two tools are the same, and you get different responses on the different platforms you're using. So there will be different things in here. Ray here is 16, she's rebellious. It's got different characters that it's generated: Lena, Jasper, etc. So I had one set of characters on one AI platform and a different set with this one. It's great just to have another tool to see the response you're getting. Now, that's not much different to ChatGPT, is it?
So why would I show you this? Well, if I come back to a fresh page, creating a new chat right here, I'm going to show you the templates that are great to use here. If I click up on the marketplace templates, I can go to "public" to see what's around. I can either search for a specific template, or let's go into "media and entertainment"; that's the best one. I'm looking for a script, and I am a screenwriter (you could be any of these others here). So there's a YouTube video script, an engaging YouTube video script. Let's actually look at one of these; let's use this.

Now it's using the YouTube video script outline. So I'll say: an informational, educational documentary about the history of AI and future predictions, just like we did in the last section. Let me just correct some of these typos, where I typed without looking. Yeah, let's see. Now it's using this template; let's see what it does. Okay: YouTube video script, the rise and future of AI; introduction, voiceover; main body, voiceover, voiceover, voiceover. And it's got a conclusion here and suggested clickbait titles. Nice: "Unveiling AI: the shocking truth, from science fiction to reality" and "What's next for artificial intelligence?"

So it's using templates, and there are lots in here for different things; please go ahead and explore. A bit like where I showed you before, when we made GPTs in ChatGPT, it's using templates that it's been trained on, and knows, to create your scripts for you. So that's why I wanted to show you this software in particular. You're going to find, now, from all the ones we've been looking at, which ones you like the most; probably ChatGPT, I do. Or, if you're a complete newbie, then it's Squibler and Writesonic. Now, in the next one, I'm going to quickly show you Gemini again, which we used for the idea generation. But I think there's something to be said for script writing, especially for information videos, using Gemini.
— Gemini: Crafting Contextual AI Video Scripts —
The last tool I want to show you for script writing in this section (we already looked at it in the ideas generation section, but I wanted to show you another free tool here) is Gemini. Now, I'll be honest with you: Gemini, by Google, is my least favorite AI tool for generating scripts. I know some people do like it, because of its structure sometimes, but I would much prefer to use ChatGPT myself. But I will show you this tool for generating scripts. I've been playing with this here; if I generate a script, for example, let me go back to here: generate me a script about the topic of AI history. So, a very generalized prompt, not even optimized for Gemini. You can see, if you come onto the site, I've broken down how to optimize this even better; I just want to show you the tool quickly so we can move on, because people ask me about free tools specifically.

And Google Gemini has a lot of pros, because it's Google, and it's linked to YouTube. If I was making content for online, it has a lot of great research knowledge, because, being Google, it's connected to the world's largest search engine, obviously. So here's "The Evolution of AI"; it's broken this down. Now, a lot of people do like this layout, which often comes with these bullet points; it is really good at organizing text and facts, I think. And of course, I never trust, and no one should ever trust, AI responses 100%. But I think, factually, I'd lean slightly towards Gemini because of its Google connections. You should still double-check, triple-check everything that comes up here. But look: I always like that it comes up with the sources, for example medium.com, and I can go through, fact-check this, and also get inspiration from any imagery they have on here. Let's check out another one: utech.co, quite a cool blog post. So I can go through and check my facts here. Okay, and gradefixer.com was the last one, an essay on artificial intelligence.

So, if I was writing factual content, I would use Gemini, at least initially. For example, let me show you a YouTuber that I love: Johnny Harris. He makes a lot of great factual videos, everything from the rise of Hezbollah, to things like why people think the world is flat, what happens if China invades, and some really deep topics here, like North Korea, where you need to be factually more correct. Now, I'm sure he uses AI when he's trying to gather research, and he often shows the research papers that he's used for this. But just finding the research papers is task enough. So if I was generating something like this, using Gemini, it's already listing here my key factual parts, and if you want to check these and use the research papers, well, here they all are, listed below. Not that ChatGPT couldn't do this, but Google does it, and it does it really, really well. It may also sometimes generate images for inspiration along with this. So if you are going to do information-based, news-based content, then Gemini is great for that, and for generating your research alongside your script. You can now, obviously, ask it, based on this research, to generate an actual script for speaking, if you wanted to.

On to the last few lectures. In this section, I just want to go over the structure of a script, to make sure you're aware of it, and then I'm going to go on and actually create the script for the project that we've been working on throughout this course.
— Script Formats 101: Structuring Your AI Video Scripts —
Just a very quick video; I mentioned it earlier, but here is a minute to make you aware of these downloads for the course. So, under video.aivideo.school, AI Video Scripts, if I scroll down, I want to talk to you about the structure of a script: everything from the three-act structure to how scripts should actually look. If you want to start doing this and get quite professional with it (some of you will not; skip this, it doesn't matter), just so you know, I have these downloads right here.

This is the British Broadcasting Corporation TV screenplay format. If I click download, you can see this is the format of a script. It should be centered like this; this is the Courier New font, I believe. And it gives you all the details: you put your address and everything in the left- and right-hand corners, you put the series title in the center, you put the fade-in, the exterior location, day one, etc., all formatted like this. Now, you could download these and upload them into AI, like I've shown you before, to start training it; for example, when you made a GPT earlier, you could make one formatted for your scripting, and you could say "script like this" and start training it.

Just like that, if I go back over, there is also the screenplay format, fairly similar with some slight differences: dialogue in the center like this. And then there's also the three-act structure, which explains step by step, in a bit more detail, how we create all stories, and how all stories, generally speaking, have a beginning, a middle, and an end. Even the most basic of YouTube videos have a beginning; here's a setup (you'll see this in all big creators), there's a problem to overcome, and we wait and see whether they overcome the problem. So that was just me making you aware of that. Let's go on to the next lecture, where I'm going to actually make the script myself for our project in this course.
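As a rough illustration of the layout rules just described (scene headings flush left, character names centered over indented dialogue in a monospaced font), here is a small sketch. The indent widths are common screenplay approximations I've chosen for the example, not values taken from the BBC template itself:

```python
import textwrap

# Approximate screenplay layout conventions (illustrative, not the BBC
# template's exact measurements), assuming a monospaced Courier-style font.
CHARACTER_INDENT = 22  # column where the character name starts
DIALOGUE_INDENT = 10   # column where dialogue lines start
DIALOGUE_WIDTH = 35    # maximum dialogue line width

def scene_heading(int_ext: str, location: str, time: str) -> str:
    """Scene headings are upper-case and flush left, e.g. 'INT. KITCHEN - DAY'."""
    return f"{int_ext.upper()}. {location.upper()} - {time.upper()}"

def dialogue_block(character: str, speech: str) -> str:
    """Character name indented toward the center, dialogue wrapped beneath it."""
    name_line = " " * CHARACTER_INDENT + character.upper()
    speech_lines = [
        " " * DIALOGUE_INDENT + line
        for line in textwrap.wrap(speech, DIALOGUE_WIDTH)
    ]
    return "\n".join([name_line] + speech_lines)

print(scene_heading("int", "Train Carriage", "day"))
print(dialogue_block("Ray", "I never thought I was meant for something more."))
```

A helper like this (or simply the downloaded templates themselves) is also handy material to upload when training a custom GPT on your preferred script format, as mentioned above.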
— Course Project: Transforming Ideas into Scripts —
In this lecture, I'm going to develop our course project. You've seen me go along now as we generated an idea for the project in the last section. Now I'm going to generate a script, and next we'll start generating a mood board, storyboard, audio, images, and video, until we've completed it and submit it to a film festival, an AI film festival like we said. And you could see the entire process if you were only to watch the last lecture in each section, where I do the course project; you'd see how I go from zero to final project.

So, if you remember, last time, in the ideas section, I was using Claude AI, one of my favorite pieces of software for generating scripts and ideas. I'm going to copy this down. This is the idea we had: it's called "When the Sky Changed", and it's about a young girl and a young boy, in Pearl Harbor and Japan, who are both simultaneously going about their day and both lose their fathers, one through the Pearl Harbor attacks and one through the Hiroshima bombing. So I'm copying what I had right here; I'd said yes, expand on this, I'll be using AI generation for this project, and this was AI-generated, with specific shots focusing on elements. So I'm copying that over, and I want to use some of the stuff we've learned here about developing scripts, and paste this in for this script and video idea; that's got all my scenes in here.

I'm then going to go to some of the material we learned during this section. The first thing I'm going to do is ask it for a script analysis. I'm going to say "complete script analysis", and then I'm giving it those 1, 2, 3, 4, 5 points: for example, engagement and suspense, identify issues, recommendations. We did this in the fourth lecture in this section; I'm just copying that exact prompt, the third slide there, trying to get a breakdown of what is good and bad with this script, and what it suggests for it. So let's see what ChatGPT is going to tell me in its analysis of this script.
Script structure and pacing: the script's structure is balanced and effective, alternating between the two perspectives, with simplicity and symbolism. Yeah, that's good. Recommendations for pacing: open stronger; instead of starting with close-up shots of the children's hands, consider an initial wide shot of each setting to ground the viewer. That's exactly what I think. Climax timing: the shift from serenity to intensity in the climax is key; consider building tension slightly earlier. Very good.

Engagement and suspense: the juxtaposition of both children's experiences builds suspense, especially at the climax. But the recommendation is midway contrast: in the middle sequence, add visual cues that reflect each child's anticipation or joy. Yeah, so you're building the audience up, loving each child a bit more, feeling more emotionally connected, and then this whole thing will happen. Intensify the climax transitions. All right: identified issues and opportunities for improvement, emotional clarity, pacing of the small details; recommendations: more visual indicators of the setting, for impact. Yeah, these are all good, all great. Okay, I actually like every single one of these.

I'm going to ask it to generate a script based on all these recommendations. Including, I mean: not "based on", but including all these recommendations. Okay, I just want to see: did it say anything about the characters here? Overall strengths, smooth transitions, issues, engagement, suspense. Okay. I'm going to see what it comes up with there. But do you remember that slide about the ideal prompt for character building? I'm going to just say... I mean, let me wait, actually; let's do this one step at a time. Let me just see what the script has done here.
And then I’m going to build on character.
105
So wide shot, opening scenes, 1940s sunlit room,
106
simple close up of hands, young American girl.
107
Yeah, Hiroshima side, wide shot.
108
So now it’s adding this stuff in where
109
it wants the wide shot, close up, boys,
110
fathers, glasses, restaurant, windowsill, catching the light.
111
Okay, and then the middle section, medium shot,
112
close up of face, additional medium shot, close
113
up, the climax.
114
All right, it’s got everything here and timings
115
for that.
116
Great.
117
What I want to do is paste in
118
this ideal prompt for character building.
119
So in this script, analyze and suggest changes
120
or improvements to the characters.
121
Okay, typing too fast.
122
Okay, and I’m going to paste that in.
123
So if you want to go back to
124
get that in the third or fourth lecture
125
in this section, you can see that exact
126
prompt.
127
So, suggestions: Grace Turner, Lillian "Lily" Harris, Evelyn "Evie" Hayes. Detailed backstory: Grace (or Lily) is an American girl, eight to ten years old, living near Pearl Harbor. Character arc, initial stage: Grace starts with a pure, unguarded perspective of the world; the room is warm and peaceful. As it begins to darken, Grace experiences a shift, an unease, symbolizing her first realization of potential danger. By the end, Grace holds her father's Navy hat, symbolizing a blend of pride and loss. Okay, narrative contribution and depth: Grace's character is crucial to illustrating the innocence of those indirectly impacted by war. That's true.
And now let me look at the boy here, Kaito. Okay, his detailed background, initial stage: much the same, much the same. Hope and resilience endure even in times of danger; his origami cranes symbolize peace and optimism. Character arcs and evolution, then a summary of suggested improvements.
To emphasize a shared sense of innocence and resilience, create visual parallels between the children's actions. For example, Grace drawing her family and the boy folding cranes can be shot in ways that mirror each other's movements. Oh, that's quite nice. Or you could have them doing the same thing: they could both be drawing, so you see one family is the same as the other family. We are just two families on opposite sides of the world here. Maybe that's something nice.
Add layers to the children's emotional shifts, rather than portraying them simply as wonder turning to concern. Refine visual elements for character context: add subtle details to reflect each child's cultural background. Grace's room could contain small photographs of her father, while the Japanese child might have a family heirloom like a calligraphy scroll. Then symbolism, and a final character list. Okay, it's broken all of this down for me.
All right, what I like here is that it sparked an idea: having them both drawing their families instead. And thinking about that, they're both the same. I want to say this Japanese person and family are the same as this American person and family: they're both innocent bystanders who got caught up in a war while doing their duty. So: regenerate the script, but this time have the children in the opening scene both drawing their families, so we see they are the same. Make both characters young girls of the same age to draw even more similarities.
All right, let’s see what it does here.
196
I guess this time we have a 1940s
197
American young girl.
198
Yep, just before close up of her hands
199
drawing.
200
Yep, and now similar room a young Japanese
201
girl around a shoji screen, rooftops, a girl’s
202
hands holding colored pencils, carefully creating a drawing
203
of her family and herself.
204
I like this better.
205
A pair of glasses belonging to her father
206
rests on the window still.
207
The morning light filter goes through.
208
Okay, and then the medium shot, the American
209
girl looks at drawing, Japanese girl looks up
210
from her drawing.
211
Yep, and then the Pearl Harbor side and
212
Hiroshima side.
213
Okay, nice.
214
Now, here's a nice touch. What are two names that are the same in Japanese and Western cultures? Okay: Hana, and Hana means flower. Rina is used in both Japanese and Western contexts. They don't have to be spelled exactly the same, though, so let me do that again; I think it's trying to match the spelling when it converts them over. What are two girls' names that sound the same but can be spelled differently in Japanese and Western cultures? Give me a list of 20.
Amy and Emi, that's nice. Anna, used in both; Hannah; Sarah or Sara; Emi and Amy; Maria. Okay, I quite like this: Amy and Emi. Imagine they're both called Amy, or Amy and Emi, or something. Maybe this will help: generate 10 ideas for this movie title, perhaps using Amy and Emi. Amy Under the Changing Sky; Amy in the Silent Horizon; Amy Sky. That's quite nice. Amy in the Shadow of the Sky; The Day Amy Saw the Sky Change. Oh, that's quite nice too. Amy of Two Horizons; Amy's Last Morning; Amy in the Falling Sky; Amy Beneath the Silent Sky; Amy Sky Before and After. I quite like some of these, and it's given me a great insight.
So what I want to do now is finalize this: for the latest update, including all changes we spoke about, generate a script for the short video. Include all scenes, descriptions, and any dialogue or narration (there wasn't any to start with), and list it out clearly. And this is going to be the script I'm going to use here.
Okay. Here's a refined script for the short film, Amy Under the Changing Sky. Okay: narrator. Oh, so it's changed this; there's now a narrator. Let's see: in the quiet hours of dawn, two young girls on opposite sides of the world begin their day like any other. Okay. Amy, Navy ships in the background. Yeah. Close-up of hands drawing; a young girl, also named Amy, sits at a low wooden table, serene, drawing her family. So it's updated that. Innocence in the form of tiny hands captures what they know: family, peace and love. So now I could generate this with the narrator's voice or without it. Something in me says it's a little bit corny, but we'll see.
Middle sequence: Pearl Harbor, transition, medium shot, close-up of Amy's face, additional details. Okay. Medium shot, medium shot. What I want now is: regenerate this script, but add one line that each of their fathers says to them. Visually, we don't see the fathers' faces, just the lower half of their bodies. Just one line is needed to aid the script. Okay, let's see.
Oh, forget the typos; ChatGPT doesn't care about those. Here's a revised script for Amy Under the Changing Sky.
This time: in the quiet hours, still the same. Wide shot, close-up of hands drawing, additional detail: a Navy hat belonging to the father rests on a table, casting a soft shadow over the room. Father's line, camera on the lower half of the father's body: Amy's father, in his Navy uniform, stands next to her, his hand gently resting on her shoulder. Father's line: always remember, Amy, I'll be with you no matter where I go. Something like that, but that's a little bit too cheesy, I think. Let's see the other father's line: you're strong, Amy, remember that, no matter what happens. Too cheesy for me, way too corny.
I’m going to run the same thing again,
322
but I’m going to tell it to not
323
be so cheesy.
324
This is where it would have been good
325
to have one of those GPTs we were
326
creating in the other one where we had
327
some quick wit or something.
328
The line should be generic, about the day,
329
a normal line for the character, not profound,
330
but have a hidden meaning.
331
That makes you think, oh, that’s something said
332
or gives a clue to what’s coming up
333
perhaps, but nothing overly cheesy.
334
Something more like this, make the most of
335
today Amy, I’ll see you this evening.
336
That’s nice.
337
And what does the father say here?
338
Be good today Amy, I’ll be home before
339
you know it.
340
Nice.
341
This is what I’m talking about.
342
Something more like this.
343
Amy glances up, smiles and nods.
344
Innocence in the form of tiny hands captures what they know: family, peace and love. Okay, this is it. This is the script I'm going to use. Technical details: lighting, transitions, set, audio, color palette. Perfect. So this is what I'm using for my script; I like it a lot. Let me just copy and paste it, as I like to store it elsewhere.
What I’m going to do as a last
357
stage, this is I’m going to go back
358
into Claude.
359
I’m going to direct Claude, break this down
360
into simple shots explaining them and I will
361
be creating this video with tools only.
362
So able to be produced with AI limitations.
363
Okay, and paste that in and go.
364
Let’s see what we got here.
365
Okay, let me break this down for you
366
with AI generation tools in mind.
367
Close up, a girl, fully style, simple house.
368
Nice.
369
Yeah, great, great, great, great, great, great.
370
And it’s burned down in a bullet point
371
shots here.
372
This is what I’m taking away with me.
373
So I'll get myself a document. What I like to do, just inside a Google Doc here, is paste in my script; this is what I had for the script. Then I'm also going to paste in what I just got from Claude, the shot breakdown, copying it along with the information it gave, just so I remember what that section is. And now I've got all this here to work with.
So now when we’re working out the next
388
section, which is doing our mood boards, and
389
then our storyboards, I’ve got all the information
390
I need, I know who my characters are.
391
And then when I went down here, I’ve
392
actually got what every single scene is for
393
me, I could go in further and say
394
break down what the girls look like, etc.
395
But I don’t need to.
396
I don’t need to.
397
Okay, so that was that I’m going to
398
go ahead and set a task next.
399
Before we go into the next section, where
400
we’re going to actually be generating some images.
401
And we can go ahead and start making
402
a mood board and think what is this
403
film going to look like.
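If you like keeping the script and the shot breakdown in one document like this, it can also help to hold the shots in a structured form. Here's a minimal Python sketch, not anything from the lecture: it assumes the breakdown is a bullet list with labels like "Wide shot:" (the sample lines below are illustrative, not Claude's exact output) and splits each line into a (shot type, description) pair.

```python
# Minimal sketch: parse a pasted shot breakdown into (shot_type, description)
# pairs. The labels and sample lines are assumptions for illustration,
# not Claude's exact output from the lecture.

SHOT_TYPES = ("Wide shot", "Medium shot", "Close-up")

def parse_shot_list(text: str) -> list[tuple[str, str]]:
    """Split bullet lines like '- Wide shot: 1940s sunlit room' into pairs."""
    shots = []
    for line in text.splitlines():
        line = line.strip().lstrip("-• ").strip()
        if not line:
            continue
        label, _, desc = line.partition(":")
        label = label.strip()
        if label in SHOT_TYPES:
            shots.append((label, desc.strip()))
        else:
            # Keep unlabelled lines as generic shots rather than dropping them.
            shots.append(("Shot", line))
    return shots

breakdown = """
- Wide shot: 1940s sunlit room, Navy ships visible through the window
- Close-up: a girl's hands drawing her family
- Medium shot: the girl looks up from her drawing
"""
for shot_type, desc in parse_shot_list(breakdown):
    print(f"{shot_type}: {desc}")
```

Even a tiny structure like this makes it easy to count shots per scene or tick them off as you generate each one.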
— Task: Write Your AI Video Script —
Unsurprisingly, the task for this section, if you're following along with each section of this course: you've generated ideas, you probably had five or so of those, and maybe you've narrowed it down to one now. Generate your script using the tools that we covered. Use one or more of the platforms we've gone through, use some of the prompts that I provided to deep-dive your script and characters, and then generate yourself a script. By all means use this page, and download it to make sure you know what the three-act structure looks like, whether you're making a short or a slightly longer video; it has all the information you need for each of the tools. You saw me use the tool from the ideas section, plus ChatGPT from this section, to make myself a script, then store it away wherever I want; I always store mine in Google Drive, in my documents. Wherever you want to store it, of course, do that, and get yourself a script ready to move on. Because in the next stage we finally get creative: you've probably done enough generating scripts and ideas, so now we can start generating some images and mood boards, and start designing what these are going to look like, ready to make our images.
— AI Audio Tools: An Overview —
AI audio. This section is crucial: even if you were to skip everything else, the scripting, the mood boards, everything like that, you need at least these elements to make AI video: audio, meaning music or a voiceover, in your video, as well as images, to create video. There are three main sections, and this is the first one of those. This section is broken down into two halves. In the first half we're going to go over music generation. Imagine you need a score or some background music: we can generate that with AI. Or perhaps you want a song with lyrics on a specific topic, maybe something funny; I'm going to show you how to do that, and if that were the case, it would dictate everything you need for your visuals, etc. In the second half of this section, we're going to talk about voiceovers and narration: getting dialogue, be it a voiceover, a narrator speaking over your video, or actual dialogue said by one of your characters. And I'm going to show you how to do that too. Now, obviously, AI audio is changing everything.
No longer do I need to go away and pay for stock footage or music rights or anything; that's quickly becoming a thing of the past. I could generate music just like this: "Got my boots, got my hat, riding in my pickup flat, cowboy swagger, that's a fact." Absolutely free, and it's copyright-free; AI has allowed me to do that, and I don't need to pay for that music anymore. I'm even in a place where I don't need to speak like this. I could clone my voice and generate it saying something to you like this: "Hi, Dan here, AI Dan. This is me speaking to you through ElevenLabs. This is great. I never need to actually speak again."
That whole thing was done by cloning my own voice. No longer do we have to just sit here with a microphone and do a voiceover if we don't want to. It's changing everything, and I'm going to show you in this section. Now, the main tool we're going to use here is ElevenLabs, currently, at the time of recording (and if that changes, I'll update this). It's by far the market leader, I think, with regard to text-to-speech, speech-to-speech (that's transforming recorded speech), cloning, etc., so I'm going to show you that. There's also Suno and Udio, which are going to be used primarily for making soundtracks and scores, as well as a few other tools. We have a tool called Filmora, which is actually an editing platform, broadly similar to CapCut; I'm going to show you Filmora for some great AI audio tools, plus a handful of others in this section. Now, this is not to be confused with sound effects, which we'll do at the end of this course. For example, if I smashed a bottle against this table, I could use AI tools, ElevenLabs actually, to generate that sound; I wouldn't need to go and find it or record it myself. But that's done afterwards, in post-production. The audio in this section is done first for a video, and I'll explain the process in the next lecture. So let's begin: I'll explain the process and why we need to do this first, and then let's get creative.
— Why Audio is Step One in Your AI Workflow —
So, some of you might be asking: why are we doing audio right now? Why is it important to do it first? It's the first creative stage, if you like: if you set script writing aside and think of actual creation, making something audio or visual, this is that first point. And we do it first for several reasons, so don't skip ahead thinking you can do this later, because you'll probably want it right now. I'll explain.
Yes, in traditional TV and film, audio would normally be handled in post-production; it's the thing you add at the end. When you're trying to create an emotion behind something, you might put music underneath it, and if you need a narrator or a voiceover, you do that after so it fits your scene. Primarily it's done in post-production. But as far as audio goes for AI video, we do this first, in a similar fashion to animation studios. When they're creating things for Pixar or Disney, for example, they record the voiceovers with the celebrities or voice artists first, and then they animate to the voice. And it's much the same way here.
Now, when we’re creating our visuals here, we are obviously evoking an emotion or telling a story.
19
Now, you need to get your dialogue first.
20
If you’re going to lip sync this, for example.
21
You need to know what it is you’re going to lip sync.
22
If you’re going to create a story with a beat, then you’re going to need to know the music.
23
So I’ve kind of got it down to three points here for the advantages of doing this first.
24
If we generate music, and I’m going to show you a couple of tools in the next couple of
25
lectures, create music first, it may dictate our content.
26
You create something that has a really slow intro you weren’t thinking of and builds up.
27
It may dictate that actually you start from black screen and fade on in.
28
It may have a certain pulse behind it that then dictates the pace of your edit.
29
If the music you generate has a pulse in the intro, then you may have visual shot cut,
30
visual shot cut on each one of those beats.
31
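That idea of cutting on the pulse comes down to simple arithmetic: at a given tempo, beats land every 60/BPM seconds, and those timestamps are your candidate cut points. A small sketch (the 120 BPM figure is an illustrative assumption, not a number from the lecture):

```python
# Sketch: given a tempo, work out where the beats (and therefore potential
# shot cuts) fall in the opening seconds of a generated track.
# The BPM value below is an illustrative assumption, not from the lecture.

def beat_times(bpm: float, duration_s: float) -> list[float]:
    """Timestamps (seconds) of each beat from 0 up to duration_s."""
    interval = 60.0 / bpm          # seconds per beat
    times = []
    t = 0.0
    while t < duration_s:
        times.append(round(t, 3))
        t += interval
    return times

# At 120 BPM a beat lands every 0.5 s, so a four-second intro gives
# cut points at 0, 0.5, 1.0, ... seconds.
print(beat_times(120, 4))   # → [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
```

Knowing those timestamps before you generate means you can plan one shot per beat instead of trimming everything to fit afterwards.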
Now, there’s no point if we were to edit and make all our videos, and then we add in our
32
soundtrack, we go, oh, and we move it all around, then we may find we’re actually short
33
and need more content.
34
But if we have our audio first and create to it with AI video, you can save yourself
35
a lot of time and headache.
36
So that’s another huge advantage.
37
And lastly, of course, voiceover will aid production. If I have a narrator saying something like "In a galaxy far, far away, we meet our protagonist, Luke," I might open on my shot of a galaxy far, far away, and as he says the name Luke, I know to cut to that shot of Luke to match what he's saying. If I'm creating these shots blind and then make the audio after, I'm going to have to re-edit my shots to fit it; having the audio first saves a step. And of course, if you know you're having dialogue and how long that dialogue lasts, whether it's three seconds, five seconds or ten, you know the length of shot you're going to want to create with AI tools for a lip sync. That's really important, crucial even.
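Checking the length of a dialogue clip before you generate its shot is easy to automate. A small sketch using Python's standard wave module: duration is just frame count divided by frame rate. The demo writes one second of silence so it's self-contained; in practice you'd point it at your exported voiceover (the file names here are hypothetical).

```python
import wave

def wav_duration_seconds(path: str) -> float:
    """Duration of a WAV file: total frames divided by the frame rate."""
    with wave.open(path, "rb") as wf:
        return wf.getnframes() / wf.getframerate()

# Demo only: write one second of silence so the sketch runs on its own.
# In practice you'd point this at your exported narration clip,
# e.g. "narrator_line_01.wav" (hypothetical file name).
with wave.open("demo_voiceover.wav", "wb") as wf:
    wf.setnchannels(1)          # mono
    wf.setsampwidth(2)          # 16-bit samples
    wf.setframerate(16000)      # 16 kHz
    wf.writeframes(b"\x00\x00" * 16000)   # 16000 frames at 16 kHz = 1 s

print(wav_duration_seconds("demo_voiceover.wav"))  # → 1.0
```

That number, padded a little for breathing room, is the minimum clip length to request from the AI video tool for the lip-synced shot.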
So if I were to lay out a production flow for AI video in general, it goes: generate your voiceover and dialogue first, and store it; that's probably the most important part, as it tells an actual part of your story. Then generate the music and score, which might dictate the pace, feeling and emotion. Then generate the video, using our images first and then the video itself. And then we do sound effects after.
Now, the next seven lectures are actually about music generation, using a couple of great sites that I love. Sometimes I'll go on there just to play, just to see what music I can generate with or without lyrics; they're really, really good. After that, we move to the voiceover and narration part, where I can clone my own voice as you saw me do, or clone someone else's, find the certain types of voices I want, get them to speak a certain way from text, or use my own speech and transform that voice while keeping my own nuances and mannerisms. Really great stuff. So that's the thinking behind why we do audio first. Let's go on to the next lecture and begin.
— Suno: Create AI Music for Your Projects —
The first tool I'm going to show you in this section is Suno, which is just at suno.com. It's free to sign up; you'll be greeted with a page something like this, so just log in. You can buy credits, and I'll show you that later, but you get 50 free credits, which is anywhere between five and ten songs depending on what you're doing, and that's refreshed every single day, so many of you won't even need to buy credits. I'm going to show you what this does really, really well; in fact, I think this is my favorite AI audio tool. I came across it when I was trying to work out how to generate songs about a certain topic, comedy songs, and I found Suno.
So this is the homepage. Right now the artist Timbaland is doing something with a sponsorship, and you can see other people's work in the Explore tab. You can scroll through and be inspired; let's listen to Korean Afrobeat, which is a funny mix. So you can get inspired by the Explore tab right there, but what you're going to use predominantly is this Create tab right here. Let me show you that, because I've been working on some funny things, just trying to see the limitations of it: everything from "Make America Great Again", a Donald Trump comedy song in country form, to "I'm Country and I'm Sexy", another attempted comedy song, to a song about Yellowstone, "Dutton's Ranch Rodeo". I'll explain all of these and show you some examples soon.
So this is where you create your music. Now, there are some advanced features here that I don't really use. Make sure you're on version 3.5, which is the newest at the time of this recording. I could upload my own audio: perhaps I've sung into a mic and recorded it and I just want background music for my own lyrics; you can do that. And Custom is where you can get a little more advanced and start putting in your own lyrics, or the style of music in more depth, but most of you won't need that for what you're doing. So let me give you some examples of what we can do here and how simple it is. There seems to be no single best way to prompt Suno (I've tested lots of different approaches), but I'm going to show you what I do.
I start with the style of music first. So I type "country music", and then I can say something like "a song about", followed by whatever we want the song to be about, which could be anything. Here I'll go with "a song about how AI video will take over Hollywood". That's the topic. So the pattern is: style of music, then topic, and then I want themes and extras, so I say "make it funny, comedy". And if you had anything else, say you want it called "AI Is the Future" or "Hollywood Is Dead", you could say "include this in the chorus" or "call it this", but you don't have to. Right now I'm having lyrics; I could make it instrumental, but obviously I want to make it funny, a comedy song.
So I click Create, and watch how quick this is: three, two, one, and done. Okay, that's it. That's real time; I didn't speed this up. Let me play you some of the lyrics, over on the side here: "Out in Hollywood, things are changing fast, directors scratching heads, wondering how to last. Robots got the cameras rolling on the scene, y'all better buckle up for the AI machine. Johnny Depp's replaced by a circuit-board chap, Meryl's double's digital, ain't no need to clap. Next Oscar winner? Silicon, ain't that a sign." Okay.
So you can see it's funny. And yeah, the voice is slightly robotic-ish, but pretty good. And it gives you two versions: same lyrics, different tune. Let me play this one: "Out in Hollywood, things are changing fast, directors scratching heads." Okay, so that's what it always does: it gives you two versions. And then, if I want to, I can just click and download these, either as a version with video or as audio. I know there are YouTube channels that just use Suno, generating songs and uploading them with video every single day, building a whole channel doing this.
So that’s how I use that, but I use it for a lot of background music also.
79
So if I say, um, I want a background song, I’m going to have it like, uh, strings and
80
piano background music, build up epic dramatic.
81
Make sure the instrumental once again is on and create, and this is how you can get some
82
really good background music.
83
Now you have all these credits available.
84
See that just took five credits per song and it did two songs.
85
You can of course buy more credits.
86
I’ll show you that in a moment.
87
But you’re probably not going to need it if I’m honest with you here.
88
So let me just play that. Really good.
89
So you’d have to, this is the same as you would pay for stock music that would cost more than this.
90
And this is free obviously, but with the subscription, this is copyright free for you.
91
You can use this in your, in any of your projects.
92
Let’s play the other one. Nice dramatic.
93
So it’s really fun.
94
It could make this public and everyone else can hear my songs, but it’s really, really
95
fun to do and you should go along and play with this.
96
So if I was doing my projects, for example, I’m going to go into this in more depth at
97
the end of, uh, today.
98
But if I was making my piece that we know is set in the USA and Japan, um, I perhaps
99
would say Japanese inspired, uh, with Western influence or Western with Japanese influence, et cetera.
100
Uh, I could start generating that, but not a huge one to show you here.
101
Actually, let me just show you how much this costs: click on Credits and Subscription. So, zero a month gets you 50 new credits daily; that's up to 10 songs free per day. Then $10 a month gets you 2,500 credits, which is 500 songs for $10 a month. Very, very reasonable for what it does.
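The credit arithmetic on that pricing page is worth a quick sanity check. A tiny sketch using only the numbers quoted in this lecture (5 credits per song, 50 free credits daily, 2,500 credits on the $10 tier):

```python
# Sanity check of Suno's credit arithmetic as quoted in this lecture:
# 5 credits per song, 50 free credits daily, 2,500 credits on the $10 tier.

CREDITS_PER_SONG = 5

def songs_from_credits(credits: int) -> int:
    """How many songs a credit balance buys at 5 credits each."""
    return credits // CREDITS_PER_SONG

print(songs_from_credits(50))     # free daily allowance → 10 songs
print(songs_from_credits(2500))   # $10/month tier → 500 songs
```

So the free daily allowance really is about ten songs a day, which is why most people following along won't need to pay.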
One second, let me show you this channel on YouTube: they've used Suno to make this country song about Prince Harry and Meghan Markle, and obviously put AI video over the top of it. Have a little listen to this. So you could quite easily create your own song about anything you're interested in, any topic. I have some here about Yellowstone, as you saw; I'm a big fan of the show. If I just grab that song for you, "Dutton's Ranch Rodeo", you can see the lyrics here: the AI intelligently knows about the show, its characters, storylines, et cetera.
So it knows the show’s set in Montana.
120
It knows what it’s about.
121
They run a ranch.
122
It knows the character Rip and his character.
123
It knows Beth and how she’s got a temper.
124
It’s using AI to generate in seconds here, lyrics, tunes about topics and accurate, great information.
125
Please utilize Suno if for anything, every day, just go on there and play and see what
126
you can generate.
127
You’ll probably be inspired by some ideas to create some stuff here.
128
Go and check it out.
129
And on the next one, I’m going to show you another quite similar, not quite my favorite,
130
but still pretty good.
131
I don’t want to show you as many as possible video.
132
Let me show you that on the next lecture.
— Udio: AI Audio Creation for All Projects —
The next tool I want to show you is Udio, which is a lot like Suno from the last lecture, so maybe you'll end up with a favorite. I want to show you this and how it works. In exactly the same way, if you come over to udio.com, there are free credits and paid credits too, and you can put in your prompts. And I say "prompts" deliberately: the reason it doesn't matter entirely how you phrase them is that both of these tools, I believe, automatically rewrite your prompt to best suit their own needs, and Udio actually explains that well here. If I were to click Manual Mode, I'd be fully in charge and it would take my prompt exactly as I set it; but if I leave that off, Udio knows how it needs to break down a prompt to give you the best result. So I often leave it off and just put in my suggestions.
Now, it has these genres of music right here along the bottom that you can select. So let's try one, shall we? Let's go for a hip-hop song, a funny comedy song about Kevin Hart and The Rock's bromance. Let's put that in there; it's topical, and it will be able to find lots of information about it. Let's just give it that and create; it knows it's hip-hop. OK, this has finished generating; let's hit play and see how these compare. Actually, I should show you this first: it has the credits right here. I selected two credits, so that's a 32-second clip, or I could have spent four credits and got up to 2 minutes 10. Compare that with Suno, which on the latest version goes up to three minutes and often generates the full three. I also had it auto-generate the lyrics; you could write the lyrics yourself or have it instrumental.
So let’s have a little listen to what these sound like. Kevin Hart and Dwayne are doing
22
so strong, cracking jokes and slamming all day long. Coming at you with that rocky charm.
23
Hart’s got the funnies, no alarm. Cruising down the street like a buddy cop show, making
24
everyone laugh, sealing the flow. The bromance keeps us training. Can’t deny these two dudes
25
OK, nice. It’s got that old school kind of hip hop.
26
Rocking with my brother, your heart and rock, lifting weights, cracking jokes. We never
27
stop on the stage, busting laughs in the gym. Lifted. Hollywood famous. Yeah, but gifted.
28
Dynamic duo never needed script.
29
Nice. It’s super cool to hear this, isn’t it? Just to have this instantly created for
30
us so, so far. So in exactly the same way, I do use this a lot for background music.
31
But if you are doing, if you are creating a channel or something, social media, perhaps
32
that’s around a topic, or if you had a scene and there was a hip hop song like that or
33
country or anything, maybe with lyrics talking about a situation, maybe not the characters
34
directly. This is awesome for that. Now I’ll show you the upgrade options on here. Free.
35
You have 10 credits per day, additional 100 credit per month, $10 a month, $30 a month
36
gets you all of this. So much the same as Suno. It’s going to be which one you prefer.
37
So go ahead and play with those. I love this. And it gives us so many options for both our
38
music for our projects, which we’ll come to at the end, but also whole channel styles.
39
If you wanted to create something by this on social media, using AI audio with AI visuals.
40
Super super great. Okay, let me go and show you in the next lecture, some AI tools in
41
editing that you can use to extend tracks, remove lyrics and things like that. I’ll show
42
you Phil more in the next lectures.
— Filmora’s Audition Extension Tool —
Just a quick one: some really cool AI audio features I want to show you in some other tools. This is Filmora, which is actually an editing program in its own right. There's a free version; you can find it online, create an account and log in, and you can also pay monthly for access to other features. It's somewhat similar to CapCut, or a slightly fuller version of Premiere Pro, and there is so much to it. It's very user friendly, far friendlier than, say, Final Cut, Premiere Pro and some of those more advanced tools. If you're not an advanced editor and you want to learn to edit, I fully suggest Filmora. It's really, really great, with loads of tutorials online.

I'm going to show you some cool stuff here with audio. I have here the song "Rise Up", which we actually created two lectures ago in Suno. If I drag it onto the timeline, and turn it down slightly, this is the instrumental song we created. Great, and it's three minutes and a touch. That's fine, but really it finishes here. What if you wanted it longer? How annoying it would be to try to fix that by hand: I could put two copies side by side and try to mesh them together, but that would be ugly. Luckily, there are tools for this; other software has them too, but Filmora, which I like to use, has them built in. If I go up here, you'll see Audio Stretch; just click that. Now if I drag the clip out, watch how fast this is. That's it. Rather than finishing where it did before, the audio comes back up again. Using AI, it understands what the music sounds like: it dipped, came up, dipped, came up, dipped, and now it finishes right here sounding very similar to before. What if I want it longer again? Now it just makes the centre part longer. Super, super useful. So when you need slightly more of a track, you don't need to go and regenerate it and splice it together; you can use AI Stretch right inside Filmora.

There are a few amazing tools here, and I've actually separated them into three lectures; the next two are also in Filmora, plus some other tools I want to show you. That was a super quick tutorial, I just wanted to show you AI Stretch. There are some really good AI tools coming up that you're going to want to use for sure. So let's continue, and let me show you the next cool tool inside Filmora.
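Filmora's AI Stretch is proprietary, so we can't see exactly how it works, but the basic idea of seamlessly lengthening a track, repeating a middle section and crossfading the joins so the intro and outro stay intact, can be sketched in a few lines. This is purely illustrative, working on plain lists of samples rather than real audio files, and is not Filmora's actual algorithm:

```python
def crossfade(a, b, overlap):
    """Blend the tail of `a` into the head of `b` over `overlap` samples."""
    faded = [
        a[len(a) - overlap + i] * (1 - i / overlap) + b[i] * (i / overlap)
        for i in range(overlap)
    ]
    return a[: len(a) - overlap] + faded + b[overlap:]


def stretch(samples, target_len, overlap=4):
    """Lengthen a track by looping its middle third with crossfaded joins,
    keeping the original intro and outro intact."""
    third = len(samples) // 3
    intro, middle, outro = samples[:third], samples[third : 2 * third], samples[2 * third :]
    out = intro
    # Keep repeating the middle until the final length will reach the target.
    while len(out) + len(outro) - overlap < target_len:
        out = crossfade(out, middle, overlap)
    return crossfade(out, outro, overlap)
```

A real implementation would read PCM frames (for example with Python's `wave` module) and pick loop points at musically similar moments, which is where the "AI" in AI Stretch earns its name.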
— AI Background Music: Filmora’s Tool Explained —
So, still inside Filmora here. Really, this software gets better and better the more I use and play with it: from removing backgrounds instantly with a click to multi-camera tools if you're doing podcast editing, there's so much to it. Honestly, I'm going to go into this at the end when we get to AI editing; I think this software has more AI features than most others and is very user friendly, so we'll talk about that more. Right now, I want to show you another audio trick, another tool we have inside here.

So, I have this video, which is actually a clip of me from filming this very section, I think. Underneath it you could have had anything; I could have grabbed some stock media from the library Filmora has built in. It doesn't matter what the clip is. What I want to show you here is how to get background music automatically with an AI tool. Rather than going to something like Suno and spending credits to get background music, if you already have access to this, you can use it for that. So, here's the beach scene, for example. If I'm talking here and I want some background music, just a light little something behind to create emotion, or perhaps I have a scene that needs background music, whatever the situation is, it's as easy as one click. Just here, these two arrows: I can go Smart Background Music Generation. Click that and it analyzes the content: when I speak, at what volume I speak, even the content of what I'm speaking about, to find us a suitable track. I can generate this once, twice, three times, and I'll get a different result every single time. And yes, it's copyright free for you to use. So this is a real easy one: if you are choosing an editing software, do try Filmora. It's a super easy way to speed things up. You could take your entire AI video and generate its background music automatically. Obviously you'll probably want slightly more control, so just regenerate as needed and use some tracks in some parts and not in others, but it can do it automatically for you now using AI.

So, let's let that generate and have a little listen. That's finished generating; let's have a listen. "That's the scripting, mood boards, everything like that." Nice, you can hear it underneath what I'm saying. Let me just mute my voice and you'll hear the track on its own. Nice, perfect. So, that's just another tool I wanted to show you inside Filmora. Once again, I could go Background Music Generation and it would do it all over again for me, with a different result every single time. The great thing is, it's already inside my editing software, so I can fade it in, fade it out, place it where I want, even cut it if I wanted to, all inside the software. I don't have to download it elsewhere and drag it in; it's all inside here. So that's really, really good. There's one more thing I want to show you inside Filmora before we move on to some other lectures.
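Part of what Filmora's generator does is decide how loud the music should sit under speech. The general technique, usually called ducking, is simple to sketch: measure the voice track's loudness in short windows and attenuate the music wherever the voice is active. A minimal pure-Python illustration on plain sample lists, and again not Filmora's actual code:

```python
import math


def rms(window):
    """Root-mean-square loudness of a window of samples."""
    return math.sqrt(sum(s * s for s in window) / len(window))


def duck(music, voice, window=100, threshold=0.05, ducked_gain=0.3):
    """Lower the music's volume in every window where the voice is loud."""
    out = []
    for start in range(0, len(music), window):
        v = voice[start : start + window]
        gain = ducked_gain if v and rms(v) > threshold else 1.0
        out.extend(s * gain for s in music[start : start + window])
    return out
```

A production mixer would smooth the gain changes with attack and release ramps so the music fades under speech rather than jumping, but the windowed-loudness idea is the same.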
— Filmora’s Voice Isolation Tool —
So, the last thing I want to show you across these three little lectures on Filmora's AI audio tools is the lyric remover, or really the vocal separator, I guess it should be called. Let me show you. I've got this track on here that we took from Suno, "I'm Country and I'm Sexy". Okay, so obviously it has music and lyrics on there. Now, there may be a situation where, for example, a scene starts with a montage of shots, and lyrics are fine over that, but then someone starts to speak, so you want the music to fade down and continue underneath, without the lyrics. Or you might just like the track, or you might want the lyrics a cappella. Whatever the reason, it's really simple inside Filmora. I select the track, and then you see up here it says AI Vocal Remover. Click that and it removes the vocals super quickly; it's really fast to do, and you can see the results we get. Incredible. I love that we live in a time where we can do this. It's also a way you could take all your favourite songs and start making your own karaoke tracks, obviously.

So, that's done; it took less than a minute, so fast. Inside here I've now got the original track, the voice, and the background. So if I mute the original, and then remove the background for a second, let's listen to just the vocal. So now I have whatever option I want: the background music and the lyrics, separated. I've actually seen AI videos where they've done this. I saw someone the other day make one called "Conway Fiddy": it was Conway Twitty, the country music star, and 50 Cent. They took the vocals from a 50 Cent song, just like this, by removing the background, and put them over the audio from a Conway Twitty song. It was super funny, but there can obviously be so many reasons you would need to do this. So that's another AI tool, super quick and easy to use, that fits this section: the vocal remover.

Now, I'm going to go and talk a little more about ethics and rules around copyright. Actually, next lecture, let's talk about copyright, because if you were to use some of this music and you plan to put it online, in some cases you're going to have lots of issues and in others none at all. So let's quickly go over that in the next lecture.
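Filmora's AI Vocal Remover uses a trained source-separation model, but the classic pre-AI trick is worth knowing: vocals are usually mixed to the centre of a stereo track, so subtracting the right channel from the left cancels them out (at the cost of also losing anything else that was centre-panned). A minimal sketch on plain sample lists, which is emphatically not what Filmora does:

```python
def remove_center(left, right):
    """Cancel centre-panned content (typically vocals) by subtracting channels."""
    return [l - r for l, r in zip(left, right)]


def extract_center(left, right):
    """Approximate the centre channel (typically vocals) by averaging channels."""
    return [(l + r) / 2 for l, r in zip(left, right)]
```

Modern neural separators, like the one behind Filmora's button, do far better: they recover both stems cleanly even when the vocal isn't perfectly centre-panned, which is why the one-click result sounds so good.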
— Copyright and Social Media: AI Audio Tips —
I want to discuss copyright and the rules on social media when you're using audio, and especially music. The platforms are very strict about music, and the rules matter just as much if you're creating for short film festivals or corporate pieces. There are lots of rules, especially regarding music, often even more than for visuals.

So I'll bring up the slide here and go over some things with you. Generally, music on social media faces strict copyright limitations: unlicensed tracks can lead to takedowns, muted videos, or even account penalties. Platforms like Instagram, TikTok and YouTube each have their own guidelines. I'm going to tell you what the rules generally are as of the time of recording, but please check the sites yourself: if you search "copyright music", you can find YouTube's, TikTok's and Instagram's pages all about how to use music in your videos (Instagram and Facebook are the same platform, of course) and see whether there have been any updates, though I doubt much will change from this.

Back to the slide: the rules are platform specific. Instagram and TikTok offer licensed music libraries for personal, non-commercial use, while YouTube enforces stricter copyright rules but provides monetization-friendly options through YouTube's Audio Library. If you go into your studio on YouTube, there is a whole library of free music. Under this test account I've got here, if I scroll down to Audio Library: all of these tracks, if I play one (that was a bit scary, that one), I can easily filter by track title or genre, so maybe I want cinematic tracks, and it will show me just those, along with the licence type. Basically, you're allowed to use these and still generate money from YouTube.

Now, if I were to use a track by a famous artist, or anyone who has registered it as not copyright free, not royalty free, then in most cases your video can still be uploaded and go out to the world, but any monetization on it goes straight to the owner of the music track. Even if you made a ten-minute video and only a few seconds of it, say ten or thirty seconds, are someone else's song, whoever owns the rights to that music gets all of the monetization for the whole video. The video may also be blocked in certain locations around the world. TikTok handles it differently: rather than redirecting revenue, TikTok (and occasionally other platforms) mutes the audio, so your video gets no impressions. You'll often see it sitting at zero, and anyone who watched it would see a silent, quiet video. If you're uploading inside TikTok or Instagram, they also have their own libraries of safe-to-use audio content.

So one solution is to use the free audio provided inside these platforms. Another is stock music, which I'll talk about next: music you've paid to license, some of which is genuinely great quality from known, current artists, and which you're allowed to upload to these platforms (YouTube predominantly), monetize, and have no limitations on. You get great quality music, but it comes at a cost. Or you can do what I showed you in the early stages of this section with tools like Suno and Udio, or the background music generator inside Filmora, and create your own music track. Then there's no need to pay and none of these limitations apply: tracks generated with Suno and Udio are yours, they are unique, and they are free to use on any social platform you could want.

Also, as I show on the slide, if you're producing for film festivals or corporate events, especially corporate events where you're making something for a client, make sure you are using correctly licensed audio that won't get them or their business in trouble or fined. You can go even further and get a licence from somewhere like PRS, as they do in the film and TV industry, but it's not necessary; we can make music with AI right here or use the free audio that's available on these platforms.

The other option, of course, is stock audio, which I'm going to show you quickly in the next lecture before we get on with making voiceovers and narration with AI.
— Finding and Using Stock Music for AI Videos —
To round off this section on getting the music score you want for your videos, I need to finish with stock music, which just a handful of years ago would have been the only way for us to get music to use. And yes, you could go onto YouTube, search "royalty free music", download something, put it on your video, and then find out it's not actually royalty free. That happens to a lot of people; please don't do that. If you're going to use music that already exists by a creator, please use stock platforms like these. These are the main ones I've used a lot, but of course they come with a price; if you're looking for the free option, use AI or the free platform music I showed you in the last lecture. They are Musicbed, which I use a lot and really like, Epidemic Sound, and Artlist.

All of these have great reputations, especially Musicbed, which I use predominantly. I've heard their tracks out in the world: I've watched things on Netflix and heard shows use the same tracks that are inside here. If I search, for example, I know there's an artist, I think her name is Lady Brie. Yeah, some of this is definitely in there; I watched the Netflix show Selling Sunset, one of those real-estate-come-reality shows, and it definitely had this song in it. So it's obviously very good quality; these are actual artists with work out there. I've also found some classical music on here, I don't even know the name, but you'd recognise it. There are some real great finds to be had. Epidemic Sound and Artlist are much the same, and they've all got a free trial, pretty much. You can see the huge brands that use them and the kinds of tracks you can get. Really nice. But I could have created that with AI, right? We could do that; surely these are all going to have to add an AI option eventually. Actually, in the next section we're going to look at Artlist; I'm going to suggest it for AI voiceover work, and we'll talk about it in about five lectures' time.

They obviously come at a cost, so let me show you the pricing for some of these. On Musicbed, viewing the pricing as an individual (I'm not a business; I'm a YouTube creator, for example), I could have unlimited songs at $29 a month. Doing the same on Epidemic Sound, paying monthly, it's almost $20 a month, $17.99. And on Artlist, for music and SFX, the Social and Music & FX Pro plans run anywhere from $9 to $16 depending on the package you want. So of course there's a cost, though not a huge one.

But there's some other stuff to think about here. With Musicbed, I know you should upload your YouTube video as unlisted first; the licence clears in the background, automatically and often within minutes, and then you set it public. Don't publish straight away, because clearing takes a little time. Also, if you have videos with this music on them and you then stop paying for your subscription, you're expected to take down the videos where you used it. So if you want that content to keep existing out there on YouTube, you're effectively stuck keeping your subscription going.

So that's how I wanted to round off this section, just talking about stock music; we obviously had to cover it when talking about getting music for our videos. So let's continue on: we're going to go now and get our voiceovers and narration. You're going to see me clone my voice, clone some other voices, and create some great, great narration and dialogue pieces coming up.
— ElevenLabs: Narration for AI Videos —
Really quick now, I just want to say this before we go on to the next half of this section. We've done music and audio. That was great, wasn't it? I'm obsessed with Suno. Now I'm going to show you how to create voice dialogue, voiceovers, audio for lip syncing, and even cloning. The next lecture is about ethics and legality, and I want to reiterate here what I'm about to tell you there. This next bit is really exciting: you can bring your characters to life, bring people to life with all different voices, and you're also able to clone yourself. You could also, though it would be very untoward, clone a celebrity or famous person's voice. I strongly suggest in this course that you do not do that. There are laws protecting against it, and as you'll see, the tools themselves guard against it, but all of this is forever changing. The laws and ethics I'm about to show you in the next lecture are current as of today's date, but they will keep evolving as AI becomes more commonplace. And depending on where in the world you're watching this from, the laws where you are distributing may differ from where you are recording. So please bear this in mind: do not use this to spoof a celebrity and make something untoward, even if it's meant to be comical or whatever it is you're trying to do. I fully suggest, and this course recommends, that you stay away from that. That being said, let's go on to the next lecture, where I'll talk about the legality of it so you know where you stand before you start, and then let's go and create some fun voices. All right, let's go to the next lecture.
— Voice Cloning: Law and Ethics Explained —
So before we continue on, I think it's really important that I go over some laws and ethics, because what I'm about to teach you will allow you to clone voices, and that is an integral part of deepfaking, which I think no one agrees with when it is used in a negative way. So let me quickly go over some of the laws, the ethics, some examples of deepfakes that are out there, and what I think you should be concentrating on.

Now, the law is obviously going to change as this develops, weekly, monthly, yearly, and it also depends on where you live. I'm just going to go over the current law, generalized for the US. Voice cloning through artificial intelligence intersects with various legal domains, including intellectual property, privacy rights, and publicity. Legally, using someone's voice without consent can infringe upon their right to control their likeness and personal attributes. For instance, Tennessee's Ensuring Likeness, Voice and Image Security (ELVIS) Act explicitly protects individuals' voices from unauthorized AI replication, imposing civil liability on violators. So if you are a celebrity, a famous person, or even a private individual, you are in control of your likeness, including your voice. It's quite apt that this was passed in Tennessee, home of Nashville, and that it's called the ELVIS Act: without it, I could replicate the voice of a famous country singer in Nashville, for example, and record songs and music that sound just like them. That's not allowed, and it's not going to be okay, because that is the intellectual property and likeness of that person.

Now, let me give you some examples that are out there for you to see, and if this is worrying for you, let me know what you think. Let me show you a couple of YouTube videos of deepfaked voices with lip syncing. This is the audio section, so let's concentrate on the audio here. "Second youngest son, he's suggesting it came out of nowhere. What we subsequently learned is it may have come from the former president or his legal team acting in bad faith. This is a deepfake example of what is possible with a powerful computer and editing. It took around 72 hours to create this example from scratch using an extremely powerful GPU. It could be improved with more computing time, but 90% of people cannot tell the difference." So that was an example of a deepfake, and there are many on here. Yes, the lip sync is slightly out, and it said it took 72 hours, but you could be cloning people's voices and using them in a malicious way. That is illegal and ethically unsound. Let me show you a couple more examples, because of course this can be used in the political sphere. "We're entering an era in which our enemies can make it look like anyone is saying anything at any point in time." Jordan Peele created that fake video of President Obama to demonstrate how easy it is to put words in someone else's mouth. "Moving forward, we need to be more vigilant with what we trust from the internet." Not everyone bought it, but the technology behind such frauds is only improving. Scary, isn't it?

So let me draw your attention back to the slide about the ethics of this. Ethically, voice cloning raises concerns about consent, authenticity, and potential misuse. Replicating a person's voice without permission can lead to identity theft, fraud, and the spread of misinformation. To address these issues, AI developers are implementing measures to promote ethical use. Companies like ElevenLabs have developed tools capable of detecting AI-generated voices with high accuracy, aiding in the identification and prevention of deepfake audio.

What are the platforms doing about it? A lot of them simply don't allow it. I will show you: if, for example, I tried inside ElevenLabs to add a lot of video footage of, say, Donald Trump's voice and then clone that voice, a warning would come up asking, "Is this your voice? We need to verify it", and they verify live through a recording of yourself. So you should only be able to clone your own voice, or use the existing voice models they provide, where people have given permission for their voices to be used. Much of the software has this block, thank goodness for the sake of ethics, that prevents you copying someone else's voice. There are, of course, rogue tools out there, though I'm not aware of any and I haven't used any; I'm showing you mainstream AI tools, ElevenLabs predominantly, and with those you won't be able to fake someone else's voice.

As a rule, and as what I'm promoting in this course: only use these tools to clone your own voice, or a voice you have permission to use, to create artistic and creative videos. This is not intended for deepfakes or misuse. It's already illegal, and we're going to see a huge uptake in legal ramifications for people doing this. Please don't.

So, there are two ways to generate audio in ElevenLabs: text-to-speech and speech-to-speech. Now that we've gone over the ethics, let's go into ElevenLabs, and I'm going to teach you how to use this amazing piece of software.
— ElevenLabs: Text-to-Speech Simplified —
1
So let’s get creative now and actually start to make some voiceover narration. Let’s make
2
some speech shall we using AI and I’m going to be using for this 11 labs now like I’ve
3
mentioned 11 labs is a time of recording this pretty much a market leader when it comes
4
to this stuff there is a free version of this you can go and test it and then of course
5
there are paid versions to allow yourself more generations and such like I’ll go over
6
those at another time but I want to make you aware of this and what this site’s like and
7
then we’re going to actually do some text to speech in this tutorial so when you come
8
to the page it looks like this 11 labs.io there’s the top part here create this is what
9
we’re going to concentrate on today then there’s workflow with helping you if you were doing
10
things like dubbing or voiceovers on a piece we don’t have to worry about this right now
11
these top four we’re going to go over in this course predominantly now sound effects like
12
I mentioned we’ll use this in post-production when we already have everything put together
13
I can already imagine that if I am of course doing my project piece about the bomb in Japan
14
and Pearl Harbor bombing that I’m going to need sound effects of things like that and
15
I can generate those here inside 11 labs voices are all the voices we can use and also
16
some cloning and then there’s a voice changer and text to speech so let me show you one
17
of these now we’re going to start with text to speech which is the easiest but gives you
18
the less less control as opposed to speech to speech but we do that in the next lecture
19
let me remove this so I’m going to go over to chat GPT and I’ve just asked it to generate
20
a simple 30-second narration of an intro of a story a modern take of Tom Sawyer so
21
I’ve got myself some nice text right here and it is as if it’s the beginning of a narration
22
of a video a modern-day version of Tom Sawyer with a buzz of smartphones on the river still
23
rose on Tom Sawyer mischievous kid of endless curiosity and a knack for trouble in a world
24
of tick tocks and text messages Tom’s adventures are anything but digital okay so if I just
25
paste that into here now what I like to do and every time you generate by the way when
26
I click generate it’s going to have a different result just slightly every time we do it with
27
regards pauses punctuations and things but I can do my best to on the banks of the Mississippi
28
where smartphone buzz move lives Tom Sawyer and mischievous kid of endless curiosity knack
29
for trouble I’m gonna give myself a couple of spaces there anything digital he’s a dreamer
30
rule-breaker wild scheme when he and I’m gonna give myself breaks here now of course in the edit
31
I could cut these all into pieces but this will just allow me to have somewhat of a small amount
32
of break you can also do things like dot dot dot we’ll see how that comes out and aid with pausing
33
here so now with this text of speech I have my text that I want them to to speak here this is
34
stability and similarity this is matching over to the voice I have down here selected how much
35
of that do I want to match here now let me go to my users here I can go find more voices this is
36
where I have my whole catalog of voices that are available inside 11 labs I can decide how I want
37
my voiceover to sound based on who the character is then the rater and such like I can filter this
38
down by trending voices or latest I can also go by language accent so I like to do this I’ll go
39
English I want my accent to be American I can then filter this further and go gender I think I want
40
a male voice for this and an old male as if he’s telling a story and then I can start listening to
41
some of these let’s take a little listen genius is 1% inspiration and 99% perspiration that’s quite
42
nice for this I think okay mr. storyteller the Sun rises in the East and sets in the West yeah
43
okay love is like the wind you can’t see it but you can feel it all right in seed time learn in
44
harvest teach and winter enjoy I quite like Morgan genius is 1% inspiration and 99% perspiration I
45
think I like Tom the most let me just say this man sees in the world what he carries in his heart
46
you can go through and you can find what you want here I think I already have inside my voice I’ve
47
generated already somebody that I wanted to use let me test Ben right here if one is lucky a
48
solitary fantasy can totally transform oh no it wasn’t Ben because he’s English accent let me
49
keep going I did not have any illegal I failed over and over and over again in my life okay this
50
who I want to use Carter okay Carter the mountain guy that kind of gruff voice on here and I can go
51
on the stability so the more variable it is under 30% may lead to instability and I can make it more
52
and more stable and I can move up the similarity right here in the same way based on the voice
53
that I want to have right there so I can go generate speech now on the banks of the Mississippi
54
where smartphones buzz but the river still rolls on lives Tom Sawyer a mischievous kid with an
55
endless curiosity and a knack for trouble in a world of tick tocks and text messages Tom’s
56
adventures are anything but digital. He's a dreamer, a rule-breaker, and every day's an opportunity for a wild scheme. But when he stumbles upon a secret hidden in the heart of his hometown, Tom's carefree days are about to take a serious turn.
Okay, nice, so that was great. I can then choose to download that, or, once again, I could just alter these settings slightly and regenerate the speech: "...just like the banks of the Mississippi, where smartphones buzz but the river still rolls on, lives Tom Sawyer, a mischievous kid with an endless curiosity." Okay, so with the similarity moved down and the stability slightly lower, it's a little bit faster, perhaps even more conversational. I quite like that. You can play along, scrolling through the stability and similarity here, and make sure you also go through and find the voice that you want.
Now I just want to show you how you would create your own voices. If I go to Voices right there, I can go Add a new voice, and you have the options right here. If you wanted to clone yourself, which I will show you, you can go Instant Voice Clone; there are voice libraries on here; and Professional Voice Clone, which is where you give it more audio. If I uploaded just two minutes of my own voice and gave it a name, it would be able to clone me accurately, which I've already done. So if I go back to text-to-speech, I can actually choose myself here, and if I generate speech, this should sound like me saying this: "On the banks of the Mississippi, where smartphones buzz but the river still rolls on, lives Tom Sawyer, a mischievous kid with an endless curiosity and a knack for trouble." That's scary; that was done with about two minutes' worth of audio. So if you have a video of yourself where you're speaking, or you just speak for two minutes and upload that, you can clone yourself. It's very, very scary.
And once again, to go back to the last lecture about ethics: you could try to clone someone else's voice, but it will pop up with a warning. Actually, I'm going to save that demonstration for two lectures' time. What I've done here is get this speech by Donald Trump, just a few minutes long, downloaded using a downloader. In two lectures' time we're going to do speech-to-speech, not text-to-speech; I'm going to show you a whole lecture on voice cloning, and we can try to break it and see if we can voice clone Donald Trump, which we shouldn't be allowed to do, plus how to voice clone yourself, step by step, or another voice that you are allowed to use.
So that was text-to-speech. Please go along and play with all the different voices you want. I'll show you a couple of other tools, Filmora, Artlist and so on, later, but play with the stability, the similarity and the different kinds of voices. Lots and lots of fun. Also, you saw that there was a small pause after this when I put in some gaps, so play with that; it's really, really great. And how real does it sound? We've come a long way with AI voiceovers in the last few years; they used to be terrible and sound like computers, and now they're so good. All right: this was text-to-speech, I'm going to show you speech-to-speech in the next lecture, and then we're going to get on to some cloning. Some exciting stuff coming up.
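As an aside for anyone who wants to script this rather than use the web UI, ElevenLabs also exposes text-to-speech over a REST API with the same stability and similarity settings shown on the sliders. A minimal sketch of building that request (the helper name, voice ID and API key are placeholders of mine, not course material):

```python
import json

# A sketch of the ElevenLabs text-to-speech REST call behind the UI above.
# The endpoint path and the stability / similarity_boost voice settings are
# real API parameters; the helper name, voice ID and API key are placeholders.
API_BASE = "https://api.elevenlabs.io/v1"

def build_tts_request(voice_id, text, stability=0.5, similarity_boost=0.75):
    """Return (url, headers, payload) for a text-to-speech request.

    Lower stability tends to give a faster, more conversational delivery,
    as heard with the Tom Sawyer passage in this lecture.
    """
    url = f"{API_BASE}/text-to-speech/{voice_id}"
    headers = {
        "xi-api-key": "YOUR_API_KEY",  # placeholder, use your own key
        "Content-Type": "application/json",
    }
    payload = {
        "text": text,
        "voice_settings": {
            "stability": stability,
            "similarity_boost": similarity_boost,
        },
    }
    return url, headers, payload

url, headers, payload = build_tts_request(
    "VOICE_ID",
    "On the banks of the Mississippi, where smartphones buzz...",
    stability=0.3,
)
print(json.dumps(payload, indent=2))
# POST url with any HTTP client; the response body is the generated MP3.
```

Playing with the `stability` and `similarity_boost` numbers here is the scripted equivalent of dragging the two sliders in the UI.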
— ElevenLabs: Speech-to-Speech for AI Videos —
So, moving on to another way to get great sound, and I think this is even better than using text-to-speech, because you are completely in control of the nuances and mannerisms, if you like, of a voice's tone, the way it is projected, the way certain words are emphasised. It's far more human when you use this, and you don't need to be an amazing actor; don't be worried, I'm not a voiceover actor myself, and you'll be able to do this. You also don't need great sound quality or an expensive microphone like you've seen in my other shots; you can just use the audio from your laptop and it works just as well. They've enabled that.
So let me show you how to do this. Before, we were on text-to-speech; now I go to Voice Changer up here, and I can do one of two things. I could upload audio, so if you've recorded a voice note on your phone and put it onto your laptop, or you have audio already for whatever reason, you can upload it here. Or I can record the audio, which is the much easier way to do it. If I toggle Record Audio here, it will record me speaking. Once again, you do not need amazing quality sound. It is just taking the words you are saying, and the way you are saying them, and converting that into the voice we're using, in this case Ben or whichever one I choose. The actual recording quality doesn't matter.
So I'm going to read this in a different way, and you'll see exactly how it can change it. I hit Record, then simply read in the way I want to: "On the banks of the Mississippi, where smartphones buzz but the river still rolls on, lives Tom Sawyer, a mischievous kid with an endless curiosity and a knack for trouble. In a world of TikToks and text messages, Tom's adventures are anything but digital." I stop that, and I can play it back and have a listen. I've intentionally said that differently, on purpose, from the standard reading we heard in text-to-speech in the last lecture.
Now, if I assign this to Carter the Mountain King and turn stability and similarity up high, I can have Carter, in his deep American voice, say this in the same way that I said each of my words. Let's generate it. Okay, so you've got my phrasing, even my same kind of British accent in the way he's saying it, but with a far deeper tone of voice. I could also move up the similarity and make it less stable; let's regenerate that and have a little listen. Nice.
All right, let's change this completely. Let's change to Jessica; I want to change genders. Similarity high, stability somewhere in the middle. Let's generate that and hear her speak as I did.
Now, here is where this has a huge advantage. In text-to-speech we were very much left to the way the text was interpreted through punctuation, which I think still has slightly more of an AI feel to it; it doesn't feel like a real person. Whereas with Voice Changer, as we showed in the example, it has far more of a human feel. So I would always, and I'm going to later in this section, use your own voice. You can see how you don't need to be an actor; you're just speaking it in the way you want it spoken, and that's fine.
Now, accents can sometimes be a thing. If I'm speaking in a British accent and I choose an American voice, it can still sound somewhat British. I've tested this, and I cannot do accents very well, but if I do even somewhat of an American accent while I say it, then it definitely sounds American in the output. This one still sounded fairly American, and you can up the similarity and play around with the stability.
So that was Voice Changer, which I really, really am a fan of. It's the best way to get the highest quality version of a voiceover for your AI video.
The last thing: in the next lecture we're going to clone a voice. I started teasing it in the last lecture and got ahead of myself; I was excited about this tool, I really love it. We're going to try to break it so that we can clone something we're not supposed to, which should pop up with an ethics block. And then I'll clone my own voice, which means that if you were doing voiceovers, documentaries or video essays on YouTube, you would never have to record anything again. Doesn't that sound amazing? So let's move on to the next lecture and clone some voices.
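If you ever want to batch-process recordings instead of dragging files into the Voice Changer page, the same feature is available over the ElevenLabs API as a speech-to-speech endpoint that takes an audio upload. A hedged sketch, assuming the public REST endpoint layout; the helper, key and voice ID are placeholders:

```python
# Hypothetical helper for the ElevenLabs speech-to-speech ("voice changer")
# endpoint: it converts a recording of your own delivery into a target voice.
# The URL pattern and multipart "audio" field follow the public REST API;
# the API key and voice ID below are placeholders.
API_BASE = "https://api.elevenlabs.io/v1"

def build_voice_changer_request(voice_id, audio_path):
    """Return (url, headers, files) describing the multipart upload.

    In a real call you would pass files={"audio": open(audio_path, "rb")}
    to an HTTP client and save the returned MP3 bytes.
    """
    url = f"{API_BASE}/speech-to-speech/{voice_id}"
    headers = {"xi-api-key": "YOUR_API_KEY"}  # placeholder
    files = {"audio": audio_path}  # the path stands in for the opened file here
    return url, headers, files

url, headers, files = build_voice_changer_request("VOICE_ID", "narration_take1.mp3")
print(url)
```

The point is the same as in the UI: your own recording supplies the pacing and emphasis, and the target voice supplies the timbre.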
— ElevenLabs: Precision Voice Cloning —
So let's clone some voices. This is an exciting one. If I come over to Voices on the left-hand side, I can go Add a new voice, and it pops up with Instant Voice Clone and Professional Voice Clone. Professional Voice Clone is what you'd want for a serious clone of your own voice. I've already done mine, and on the package I'm on I've reached my limit of one. I gave it up to 30 minutes of clean audio, which is a lot, and I was then able to make video documentaries and voiceovers, using text-to-speech, where I didn't need to actually be present. The clone was so good that I could have myself speak like this; you'll also hear me right now speak artificially: "On the banks of the Mississippi, where smartphones buzz, but the river still rolls on." Scary, very scary.
Okay, so let's go back to Voices, Add a new voice, and I'll teach you how to clone. I said we're going to do two things here. One, I'm going to try to break it, and we'll see the warning come up. If I am successful in breaking it and cloning a celebrity, someone I don't have permission from, I will of course email and contact ElevenLabs, as I have done before, and say: this is someone's life that people could be messing with if they use their voice. They can then use their know-how to block that voice and ask people to verify it's their own voice, should a similar voice be generated.
So what you want is Instant Voice Clone right here. It comes up and you can give it a name. I'm going to try to clone Donald Trump; we'll first do what we shouldn't be able to do. What I've done is find a three-minute speech over on YouTube and download it. If I open it up here, I'll just play a bit: "...while fleeing far from the scene of the wreckage. The goal of cancel culture is to make decent Americans live..." Okay, so there it is. I can grab that, obviously freely available, and pop it into here, and it starts doing the background work. It says 1 to 25 samples; you can upload as many as you want, but ElevenLabs is actually pretty good with just one or two, it doesn't matter. Here I'm going to lie and try to fake it, and then I will let them know if it lets me do this. I'm not using this for any malice, and you shouldn't either; I'm showing you the warning to show you cannot, and should not, do this.
If I go to Add Voice, it automatically starts doing its thing in the background to recognise the voice, and then it says: voice requires verification, we need to verify that the voice is yours. So it does its job very, very well. I would need to verify the voice by recording my own, or, it says, I can use a similar voice if I want to, reading the sample lines: "opportunity doesn't knock, build a door", "the best way out is always through". But that's good, isn't it? That's in place so we can't clone someone's voice and then make a malicious video, for example, or have him say something he didn't say inside ElevenLabs. I'm really glad that's here, and I'm showing you: don't do it, and you can't even if you wanted to.
So what I want to do now is add my own voice. I go to Add a new voice once again, exactly the same way, Instant Voice Clone, and I'm going to call this "Me". I then drag in some audio of myself. Okay, just done that; there's a 10 megabyte maximum and this is 5.7 megabytes, which is fine. I tick "Yes, this is me", and it is me, I add the voice, it does its work in the background, and I've cloned my voice. So this is how you'd clone your own voice.
Where would you use this? Say you were someone like MagnatesMedia or Jake Tran, who make video essays about all kinds of things, from the truth about PayPal to MrBeast or IKEA, with their voiceover throughout and never on screen. They could clone their own voice, use AI, even ChatGPT, to write their scripts, and then paste the script in here. I could actually do that right now: let's go "write me a five-minute script about the dark truth of Coca-Cola" and wait for that to generate. I'll just grab this first section (I would obviously grab the whole essay), go back into text-to-speech, paste it in, find the "Me" voice we just generated, set it very similar to me, and generate: "Coca-Cola, the world's most iconic drink. It's everywhere, from crowded stadiums to tiny convenience stores in the middle of nowhere. But what lies beneath the familiar red label? Today, we uncover the dark..."
And that was done with only about two minutes' worth of audio, in about one minute, with one clip. I would of course upload ten or more samples; when we added a new voice with Instant Voice Clone, you saw you can add up to 25, and the more you give it, the better, obviously. You can make the clone even better with more and more data. So that was really good, wasn't it? That was fun to do. But make sure you use this tool ethically.
Okay. Now, ElevenLabs is, I think, the market leader in this area, but there are some other tools trying to make headway, especially Filmora; I showed you some of its AI audio earlier in this section, and it was really good. There's also Artlist, and more. I'm going to show you some other tools of note that you may want to play with if you don't want to use ElevenLabs, because if I go into the pricing, you can see that this costs: there's a free tier with so many credits, there's five dollars a month, and there's twenty-two dollars a month, which is the one I'm on, plus lots of different plans if you want a different price point or package. Also, Filmora has a really good voice for trailers, if you are making trailers. Okay, let's talk about that in the next lecture.
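The Add a new voice flow above also exists as an API call, which is handy if you're scripting your own clone from a batch of samples. A hypothetical sketch, assuming the public voice-add endpoint; the key and file paths are placeholders, and the same "is this your voice?" verification shown in the UI still applies, so only clone voices you own:

```python
# Sketch of the "Add voice" (instant clone) step via the ElevenLabs REST API:
# POST /v1/voices/add with a name and one or more audio samples as multipart
# form data. The API key and file paths below are placeholders.
API_BASE = "https://api.elevenlabs.io/v1"

def build_clone_request(name, sample_paths):
    """Return (url, headers, data, files) for the multipart voice-add call.

    More samples generally means a better clone (the UI allows up to 25).
    In a real call each entry would be ("files", open(path, "rb")).
    """
    url = f"{API_BASE}/voices/add"
    headers = {"xi-api-key": "YOUR_API_KEY"}  # placeholder
    data = {"name": name}
    files = [("files", path) for path in sample_paths]
    return url, headers, data, files

url, headers, data, files = build_clone_request("me", ["clip1.mp3", "clip2.mp3"])
print(data["name"], len(files))
```

The returned voice ID can then be used in the text-to-speech or speech-to-speech requests, exactly as the "Me" voice was picked from the voice list in the UI.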
— Additional AI Audio Tools for Creators —
So ElevenLabs is obviously not the only tool around for AI voiceovers; there are lots, and you'll find them if you just search "AI voiceover". These two are the main ones: Artlist, which has adverts everywhere right now (you've probably seen them if you've been on YouTube looking at anything AI), and Filmora, which I've shown you. There are others, like Motion Array, which I've used a lot for editing templates, and ElevenLabs is here too. There are loads and loads; everyone is doing it. But these two seem to be the top tools of note that I want to show you and talk about. There are many more, and I will update this lecture as more come along; if anyone takes over from ElevenLabs as market leader, I'll do lectures on that as well. But if you're not interested in using ElevenLabs, here you go.
Artlist first; I've shown you some of this briefly, so let me show you some voiceover stuff. I've pasted in an introduction to Star Wars here, and I can choose between the different voiceover options exactly the same way. Let's generate this and see what it does. Just of note, I've got 400 characters here, basically one character per letter, and it's showing 160 used right now; that's the free trial, so you can check it out and then subscribe. Let's listen: "In a galaxy far, far away, an epic battle unfolds between light and dark, where rebels rise against an evil Empire's relentless..." So that's doing a rural, country American, older guy speaking. It's actually really good; I really like the tone they're using, and it's very realistic. You can go into the voice catalogue, where there are loads and loads, and check them out. Let me just play one: "There's nothing quite like the thrill of combining handmade art with modern technology." That would be a great one for an advert, wouldn't it?
The other one I want to show you is Wondershare Filmora, which I use quite a bit and have suggested to you for editing earlier in this section. Here inside the platform I've got text, with the exact same "In a galaxy far, far away" intro. Let's do text-to-speech; they've got lots of different options on here, and they're ever-growing. I could just play some: "Wondershare, creativity simplified." Well, that was fast of them. And there are some other ones. But my favourite, which I've used a lot, is for trailers: Movie Trailer, adult, deep. I think Filmora has the best version of this kind of voice, that American trailer voice. Let me click Generate; it transcribes this very fast, really fast, so I'll show it in real time. Actually I won't skip forward; it's already completed. Let's have a listen: "In a galaxy far, far away, an epic battle unfolds between light and dark, where rebels rise against an evil Empire's relentless grip. Amidst the stars, unlikely heroes emerge, destined to shape the fate of worlds and awaken the power within." Okay, so that's really good. Of all the tools, even ElevenLabs, that's one of my favourite voices for that style of video, if you need it. I just wanted to make you aware of the other tools available outside of ElevenLabs.
Now I'm going to move on. We've been following along with my course project, the video we're creating from scratch, start to finish, which I'll then send to festivals. Over the next two lectures I'm going to create two things: first the music, the score and the sound for it, and then any narration that's needed, any narrator's voice or character voices. Let's do that.
— Course Project: Custom AI Music Creation —
Our project, the one we've been working on throughout this course from beginning to end, is this. So far we generated an idea and then a script. This is my "Amy, Under The Changing Sky" working title (it'll probably be called something different), which is about two corresponding, overlapping stories of two young girls, both called Amy, one in Pearl Harbor and one in Japan, and how they too are victims of war, losing their fathers effectively simultaneously in the story, although they're obviously years apart. So that was our story, and this is the script we generated in ChatGPT and some of the other tools we used. That was a really good section on idea generation, actually; I really love Claude, so go and check that out. It's also given me a list of shots, which we'll be using in the next section.
So what I want to do is create some music for this. In this lecture I'm going to create the music, the score, and in the one after that I'm going to create the voiceover part: I've got a narrator and a couple of lines from characters. So I'll be doing that across these next two lectures.
First off, I'm going to use Suno and Udio side by side, because I've got 50 free credits, which is 10 song generations in Suno, and in Udio I've also got some free generations. I'm not actually signed up to either of these; I've never needed that much music, and maybe it's the same for you.
So I'm going to create instrumental music; remember, we click this. For the score, I want more than enough music. This whole video is only going to be two or three minutes long, but I want a minimum of five to ten songs of varying kinds. I'm imagining my first scene right here: "In the quiet hours of dawn, two young girls on opposite sides of the world begin their day." It's a wide shot, they're in their rooms, but one is in the USA and one is in Japan. So my idea is to generate both a Western, American style, one with Japanese influence, and songs with both influences in there. I've already told it instrumental, and I want: a cinematic, emotional soundtrack for background music in a movie, with a mix of USA, Western and Japanese, Asian influence. Let's see what this generates.
While that's generating, I'm going to go over to Udio and copy this almost entirely into there. Auto-generate, instrumental, yes. Let's go to the advanced creation tools. Do I want it to be atmospheric, which is probably what I'd like most, or emotional? I want it to be 32 seconds, pretty good as a quarter, or maybe a sixth, of this script, and that's only a couple of credits. Let's generate that, and let's go back to Suno and start listening.
So, "Echoes Across the Horizon", actually a pretty good name for the title of this, too. Let's play it and have a little listen. Okay, that is quite nice, it works, and it has enough emotional pull behind it. Let's listen to the other one. I quite like this intro; I can imagine the young girls both drawing in their homes, and then it ups the tempo, which we could use for when their fathers go off to work or are attacked. Okay, they sound really nice. Let's go over to Udio and listen to their creation, also called "Crossing Horizons". This is quite good. Maybe it's a little too fast there for our story, but that first half was pretty good, and we could go in and change it with the advanced features right here: I could change the song structure and anything else like that, plus how fast it generates, which is pretty good on here. But I'm just a bit more of a fan of Suno.
So let me do this again. I'm going to go "cinematic background music inspired by Pearl Harbor" and see what comes back if I put in the actual event we're talking about. Okay, let's continue and create as many as I can: "cinematic background music inspired by Japanese themes". Let's just keep it; that phrasing doesn't quite make sense in English, obviously, but I'm just giving it as much "Japanese themes" inspiration as I can. We know from the tutorial that Suno automatically rewrites this behind the scenes to better match what its own generation tools need anyway. So let's see what comes back; if I ask for Japanese themes, we should hear something instrumental that reminds us of East Asia. Let's have a little listen. "Haiku of the Heart". Okay, somewhat getting there. I do quite like those high piano tones, but am I getting a real East Asian feel from it? Not so much.
So: "cinematic background music, East Asian, oriental instruments", and let's see what that generates. I'm going to do the same thing over in Udio while we're here. Okay, back over to Suno: "Moonlit Dragon". Here I'm really getting a bit more of the feel we would typically, perhaps stereotypically (and I want to make sure we don't go down that road), call an East Asian, Japanese theme. Let's listen to the next track. Sounds like a stringed instrument, a harp probably, or a kind of guitar; I'm not a musician, but. Let's see what Udio has: "Whispers of the Orient". I'm not too sure about the first bit, but the horn-type part coming in now is quite nice. Oh, this is. All right, really nice, finally.
So I'm going to use my last two credits right here. I want just "Calm background music for a scene intro, USA themed music". That's all I'm going to give it, nothing else; I'm not telling it instruments, I'm not telling it a music style, nothing. I want to see what it generates. Now, of course, once I've used my last two, tomorrow I'll get another 10 songs' worth for free, or I could upgrade. But I want loads and loads of these tracks; I'm downloading them, changing them, and I'm going to mix them together. I probably won't have one track throughout. I might mix between the Western-style music and the East Asian style as the scenes change from one to the other, or perhaps use a merged track that it comes up with. Tomorrow, when I come to generate again, I'll ask it to make a track that constantly changes between a very East Asian sound and a Western one, back and forth, and perhaps use those changes to dictate my scene changes. This is why it's important to do your music first: it will dictate the visual images that I'm creating, of course.
So let's listen to "Across the Plains". Instrumental, calming, ambient; it is definitely that, because I put "calm background music". It's almost like the music you get in the background of a massage parlour or a spa. I could definitely see that, for sure. I'm going to download all ten of those and start keeping myself a catalogue of music. I'll definitely be using some, if not all, of them, and I'll be regenerating more every single day in exactly the same way, still with no vocals, until I have loads. Then I'm going to start putting stuff down on my timeline.
Now, that doesn't mean I might not change things slightly towards the end, but when we come to generate our images and lay everything down on the timeline, I want some music already down there so I can edit to the pace of the music, for sure. But we can change that, everything: we can change images, we can change a whole script and story, at any time. Just as best practice, I want to have this first.
So that's me done for the project, creating and downloading the music I want. Now, in the script I can see that I have some lines, and again, the exact audio, as in what they're actually saying, may change, and so may whether I use a narrator at all. There's also a little line from the father in the story. I want to generate those with some voiceovers for this project. So let's do that in the next lecture.
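Neither Suno nor Udio needs any code, but the workflow above, many small prompt variations around the same brief, is easy to systematise. A purely hypothetical helper that just builds the prompt strings to paste in (the style list mirrors the variations tried in this lecture; no Suno or Udio API is involved):

```python
# Build a batch of music-generation prompts from one base brief plus style
# variations, mirroring the manual approach above. These are plain strings
# to paste into Suno or Udio; no API is involved, and the helper is my own.
def music_prompts(styles, base="cinematic, emotional, instrumental background music for a film"):
    """Return one prompt per style, each appended to the shared base brief."""
    return [f"{base}, {style}" for style in styles]

prompts = music_prompts([
    "USA, Western influence",
    "Japanese, East Asian instruments",
    "alternating Western and East Asian themes",
])
for p in prompts:
    print(p)
```

Generating a handful of prompts like this each day against the free credits is exactly the catalogue-building habit described above.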
— Course Project: Crafting Voiceovers with AI —
1
The last part of the course project I
2
want to do as we follow along and
3
go through this throughout this course step-by
4
-step.
5
We did the music with Suno and Udo
6
in the last lecture, now I’m going to
7
generate some voice.
8
So I have just some parts and again
9
this might change and can evolve and if
10
I decide to change this I will just
11
go back and regenerate this but I want
12
something in my timeline now so I can
13
see when I come to edit if it’s
14
working or not and then I can change
15
it until I put it in my edit
16
and see it I can’t.
17
So I want to just copy over first
18
we’re going to do the narrator.
19
So inside 11labs first I’m going to do
20
text-to-speech let’s do that one and
21
let me just go back and forward and
22
copy everything from my narrator.
23
So I’ve copied everything over into here this
24
is everything on my script that was from
25
the narrator I just put it into a
26
section here I’ll need that and you will
27
if you follow along in a moment.
28
So the first thing I want to do
29
is think to myself and perhaps you already
30
have an idea or perhaps you want to
31
generate several what kind of voice do I
32
want here?
33
I want American, male, gentle but strong enough
34
to hold a story and a narrator.
35
So let’s just go through I already have
36
some on here let me just play with
37
some of these like this is not going
38
to work.
39
The people who are crazy enough to think
40
they can change the world are the ones
41
who do.
42
Okay let’s actually not as bad as I
43
thought and that’s why sometimes playing with things
44
is good.
45
So let me do that again and let
46
me go Adam he’s a late-night British
47
radio host I never thought of having a
48
British accent with this.
49
Let’s see what this sounds like.
50
In the quiet hours of dawn two young
51
girls on opposite sides of the world begin
52
their day like any other.
53
Innocence in the form of tiny hands.
54
So it has less emotion than perhaps I
55
would like it to have but we can
56
fix that of course when I do I
57
can do the speech to text and I
58
can do that.
59
Let’s keep Adam Stone let’s keep his voice
60
on a back burner I hadn’t thought about
61
it and perhaps it’s slightly more bedtime story
62
than it is narrating this but let’s have
63
a listen what if I do Carter the
64
Mountain King again and generate the speech.
65
This is going to be pretty intense I
66
think perhaps too intense.
67
In the quiet hours of dawn two young
68
girls on opposite sides of the world begin
69
their day like any other.
70
Innocence in the form of tiny hands captures
71
what they know.
72
It’s a bit American cheap movie it sounds
73
like what do you call those in America
74
Hallmark movie kind of thing not so much
75
the emotion I’m trying to convey here.
76
So let’s keep going I’ve got a few
77
more here this one’s almost like a presidential
78
type.
79
In the quiet hours of dawn two young
80
girls on opposite sides of the world begin
81
their day like any other.
82
Innocence in the form of tiny hands captures
83
what they know.
84
It’s a bit American cheap movie I’m trying
85
to convey here.
86
So let’s keep going I’ve got a few
87
more here this one’s almost like a presidential
88
type The ice is only what the mind
89
is prepared to comprehend.
90
The thing always happens that you really believe
91
in and the belief in a life without
92
love is like a tree without blossoms or
93
as we are liberated from our own fear
94
our present government of the people by the
95
people for the people shall not perish from
96
the earth.
97
Actually perhaps I quite like a female voice
98
for this.
99
Let me choose Charlotte.
100
Let me generate this speech and listen to
101
this.
102
This is why playing with it and going
103
for all these different voices is crucial.
104
In the quiet hours of dawn, two young
105
girls on opposite sides of the world begin
106
their day like any other.
107
Innocence in the form of tiny hands captures
108
what they know.
109
Yeah, maybe because it’s a story about two
110
young girls and I want it to be
111
gentle and have as much impact.
112
That’s actually really, really good.
113
Okay.
114
I’m going to, one, I’m going to do
115
this twice now.
116
I’m going to use Charlotte.
117
Um, I’m going to have, this is text
118
to speech.
119
I’m just going to click download here and
120
it’s going to download that.
121
And then I want Charlotte again, but I
122
want voice changer.
123
So instead I’m going to record myself speaking
124
this.
125
That’s why I put this down here.
126
This is everything in the narrator.
127
I’ve already got a British accent, so I
128
don’t have to fake an accent for this,
129
but you can see me, uh, speak this.
130
I’m just going to put this on the,
131
on my other screen, see me speak this.
132
And then we can see how that came
133
out.
134
Okay.
135
Let’s record the audio as I read this
136
with a more human tone.
137
Obviously... "In the quiet hours of dawn, two young girls on opposite sides of the world begin their day like any other. Innocence, in the form of tiny hands, captures what they know: family, peace and love. But as dawn stretches across the ocean, something stirs, reaching towards each child from the farthest corners of the world. Under the changing sky, two hearts beat as one, connected by a silent understanding. Across vast oceans, beneath a sky forever changed, their spirits meet: two children bound by a world they cannot yet comprehend, and a hope they unknowingly carry."
150
So that was me recording.
151
I had to do that last bit a
152
few times on the speech.
153
Let me grab that back over and just
154
put this back up here.
155
Uh, so I have done that and this
156
is my recording.
157
Let’s generate that in Charlotte’s voice and see
158
how that sounds in the quiet hours of
159
dawn, two young girls on opposite sides of
160
the world begin their day like any other
161
innocence in the form of, okay, let me
162
just change that.
163
I’m going to go similarity and move this
164
ability like here.
165
Let me regenerate that and have another listen. "In the quiet hours of dawn, two young girls on opposite sides of the world begin their day like any other." Okay. You can already see, when I said "like any other", that's a human expression that the AI voice just couldn't do.
173
I’m going to exaggerate the star right here
174
on it and move down the similarity.
175
Uh, this style exaggeration, the style of Charlotte
176
similarity to my voice.
177
Let me regenerate that. "In the quiet hours of dawn, two young girls on opposite sides of the world begin their day like any other. Innocence in the form of tiny hands captures what they know: family, peace and love. But as dawn stretches across the ocean, something stirs, reaching towards each child."
184
I really like that where I’m doing speech
185
to text, you just have those human nuances
186
in the voice, the mannerisms slightly when you’re
187
speaking that just isn’t quite captured with AI
188
text to voice as opposed to a speech,
189
sorry, text to speech as opposed to speech
190
to speech.
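If you end up generating lots of narration takes like this, the same text-to-speech step can in principle be scripted against the ElevenLabs REST API rather than clicked through in the UI. This is a minimal sketch: the endpoint shape and the stability / similarity_boost / style fields mirror the public API docs and the three sliders used here, but the voice ID and model ID are placeholders you would swap for the ones in your own account.

```python
import json

# Sketch of an ElevenLabs text-to-speech call. The voice ID below is a
# placeholder, not Charlotte's real ID; look yours up in the voice library.
API_BASE = "https://api.elevenlabs.io/v1"

def build_tts_request(text, voice_id, stability=0.5, similarity_boost=0.75, style=0.0):
    """Assemble the URL and JSON body for a text-to-speech call.

    stability / similarity_boost / style mirror the three sliders in the
    UI (style is the "style exaggeration" slider).
    """
    url = f"{API_BASE}/text-to-speech/{voice_id}"
    payload = {
        "text": text,
        "model_id": "eleven_multilingual_v2",  # assumed model choice
        "voice_settings": {
            "stability": stability,
            "similarity_boost": similarity_boost,
            "style": style,
        },
    }
    return url, payload

if __name__ == "__main__":
    url, payload = build_tts_request(
        "In the quiet hours of dawn, two young girls...",
        voice_id="CHARLOTTE_VOICE_ID",  # placeholder
        style=0.4,
    )
    # A real call would POST this with an xi-api-key header:
    # requests.post(url, json=payload, headers={"xi-api-key": KEY})
    print(url)
    print(json.dumps(payload, indent=2))
```

The returned audio bytes would then be saved to a file, the same as clicking download in the UI.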
191
So I’m going to download that.
192
And now I’ve stored both of those.
193
Now, the only other thing that I had
194
in here were two lines, one line each
195
by their fathers.
196
So I want to choose an American man’s
197
voice right here.
198
Um, and for that, I might actually, it’s
199
such a short line that perhaps text to
200
speech might suffice, but it’s completely up to
201
you.
202
You could also do this yourself against speech
203
to text.
204
Let’s do this.
205
And now I actually want to try the
206
Oh yeah, the presidential.
207
Let’s try that one and generate this.
208
Make the most of today, Amy, I’ll see
209
you this evening.
210
Yeah, that was very wooden because it’s that
211
presidential type speech.
212
Okay, let’s do Carter, the Mountain King again.
213
Make the most of today, Amy, I’ll see
214
you this evening.
215
That would be a very deep-voiced father. I want to keep going.
217
Make the most of today, Amy, I’ll see
218
you this evening.
219
Yeah, that’s quite a nice voice.
220
Maybe that one. But I might do speech to speech, because they're just coming out so wooden.
224
Let’s have a listen to the thing always
225
happens that you really believe in.
226
And the belief in a thing makes it
227
happen.
228
Make the most of today, Amy, I’ll see
229
you this evening.
230
Okay, that’s nice.
231
I like Brian, but I’m going to do
232
a voice changer.
233
So the line was, make the most of
234
today, Amy, I’ll see you this evening.
235
Let’s record audio.
236
Make the most of today, Amy, I’ll see
237
you this evening.
238
Now, that may be too British for that. I might have to fake an accent here.
240
Make the most of today, Amy, I’ll see
241
you this evening.
242
I’m going to do style exaggeration, which is
243
the style of that and the similarity to
244
my voice lower.
245
Make the most of today, Amy, I’ll see
246
you this evening.
247
Well, that’s quite nice.
248
I still sound a little bit British.
249
So let me do this again.
250
Okay, worst American accent coming up.
251
Let’s give this a try.
252
Make the most of today, Amy, I’ll see
253
you this evening.
254
That might do.
255
Terrible.
256
I’m so sorry all the Americans listening.
257
Let’s generate speech.
258
Make the most of today, Amy, I’ll see
259
you this evening.
260
Yeah, so exactly that.
261
Exactly that.
262
So I want to copy that one.
263
Let’s download that.
264
Now I’ve got these stored.
265
Now the other one was, let me see
266
here.
267
Be good today, Amy, I’ll be home before
268
you know it.
269
Now for this, I’m going to do text
270
to speech.
271
I’m going to do something a little bit
272
different.
273
I’ve decided with the story.
274
Let me do this and let me just
275
go to Japanese and let me have a
276
little listen.
277
I quite like that.
278
Okay, Hinata. Back to text to speech; the voice I want is named Hinata. Let's have a listen. At first, it'll be trying to generate the English language with a Japanese accent.
284
So I’m going to keep these actually as
285
they are.
286
Let’s generate this.
287
Be good today, Amy, I’ll be home before
288
you know it.
289
So I’ve got a choice now.
290
I can either keep it in English with an accent, like they do in some movies, and download and keep that.
294
Oh, let me try this.
295
So if I just translate with Google Translate,
296
be good today, Amy, I’ll be back before
297
you know it.
298
I could paste this in right here.
299
Let’s go back to this, go back to
300
text to speech.
301
Let’s put that in here and let’s generate
302
speech in Japanese.
303
Be good today, Amy, I’ll be home before
304
you know it.
305
And I could do that and download that.
306
Is it accurate?
307
I’m not entirely sure.
308
I’d have to ask somebody who’s Japanese, but
309
it’s going to be pretty good.
310
Let me go to dubbing studio right here.
311
And if I just create a new dub
312
and I’m going to do this, I’m going
313
to call it Japanese dub one line.
314
The language is English and the target language
315
is Japanese.
316
Now I’m going to put in, if I
317
go to my downloads, there was the track.
318
Be good today, Amy, I’ll be home before
319
you know it.
320
There we go.
321
I’m going to take that, which is in
322
English.
323
I’m going to drag it right here.
324
Be good today, Amy, I’ll be home before
325
you know it.
326
OK, and let’s create the dub.
327
So now it’s just completed.
328
That didn’t take very long at all.
329
Let me click this and we can have
330
a little play and listen to it.
331
Source was English, going into Japanese.
332
Let’s have a listen.
333
So that sounds a lot more authentic and
334
a lot better.
335
So let’s download that.
336
And now I have options. I quite like it, and it's fun, isn't it? Because now it's coming to life; you're really starting to visualize your story. I would quite like that to come up in Japanese with the dub, and underneath it the line shown as text on screen, so we have the real language.
344
There’s only one line in it.
345
Why not just do that in Japanese with
346
one bit of text needed on screen?
347
That would be amazing.
348
So now I’ve downloaded all of these.
349
What you want to do is start to
350
organize this.
351
So I’m just making a drive on my
352
Google Drive right here.
353
This is my AIVS course project.
354
In here I have my script.
355
Let’s add ourselves a new folder.
356
Let’s call this audio.
357
Let’s create that file management.
358
Very important inside here.
359
I’m going to put down all of my
360
downloads.
361
So here is everything that we’ve been creating.
362
All of these go in here. And once they're inside, I can label them much better: some of it is my complete voiceover narration (text to speech), some is speech to speech, and some is the Japanese line.
368
So if I was, for example, here, I
369
could open this one, have a listen.
370
In the quiet hours of dawn, two young
371
girls on opposite.
372
So that one is text to speech.
373
Not quite as good as the speech to
374
speech.
375
So I can go on and label these. I could also download right now all of the songs that I want to have on here, the audio from this one and the audio from that one, and I can put these in here too. Or I could separate them into voiceovers and music tracks. However you want to have these set out is completely up to you.
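If you would rather script this file management than drag files around, a few lines of Python can build the same project layout and sort the downloads by a naming convention. This is just a sketch: the folder names and the keyword rule (anything with "music", "suno" or "udio" in the filename goes to the music folder) are assumptions you would adapt to your own labels.

```python
from pathlib import Path
import shutil

MUSIC_KEYWORDS = ("music", "suno", "udio")  # assumed naming convention

def organize_audio(project_dir, downloaded_files):
    """Create audio/voiceovers and audio/music inside the project folder
    and move each downloaded file into the right one by filename keyword."""
    project = Path(project_dir)
    voiceovers = project / "audio" / "voiceovers"
    music = project / "audio" / "music"
    for folder in (voiceovers, music):
        folder.mkdir(parents=True, exist_ok=True)

    placed = {}
    for f in map(Path, downloaded_files):
        name = f.name.lower()
        dest = music if any(k in name for k in MUSIC_KEYWORDS) else voiceovers
        shutil.move(str(f), str(dest / f.name))
        placed[f.name] = dest.relative_to(project).as_posix()
    return placed
```

Run against your downloads folder, it leaves voiceovers and music tracks separated and ready for editing.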
388
But organize these, put these in here, because
389
in the next task, I’m going to ask
390
you to do this.
391
You’ll see the task for this project.
392
And this is the best way to start
393
organizing, because next we’re going to start making
394
some visuals in the next section.
395
Very exciting.
396
And you’re going to want to put all
397
these together in a place that you can
398
find them easily for editing.
— Task: Generate AI Music and Voices —
1
So we’ve come to the end of that section. I really hope you enjoyed it. It’s fascinating.
2
It’s the first time we’re getting really creative in one of the three main elements of making
3
a video. So obviously I have a task for you here now at the end to progress with your
4
own projects. Please follow along. Take a look at this. So the objective is to create
5
audio elements for your A.I. video project using the tools covered in this module. Obviously,
6
there’s 11 labs which I use predominantly, but also Suno, Udio and any of the other stuff
7
that you find. You can, of course, implement and let me know what it is that you find useful.
8
So, instruction number one: please generate A.I. music. This is the score for your video. Use one of the music generation tools we discussed and create a track that complements your video project. Ensure the music fits the mood and style of your visuals. Remember, this is about creating an emotion, so it's super important that you get that right. Don't just generate one and think that's fine; generate lots of them. Now, number two: create narration
13
and dialogue. Your video might not have dialogue as such. It might have just narration. It
14
might not have any narration. You might just have music. You might have done a music style
15
video that just has lyrics, so you might not need this at all. But if you are having it in your project, or even if you're not, please do some just to familiarize yourself. Make sure you do text to speech and compare it directly with speech to speech, using your own mannerisms and changing the voice on that, just like you saw me do. Then keep these: save them and prepare. Ensure
19
both your A.I. generated music and narration are ready for integration into your video
20
project. You're just starting to store things up now: your scripts, your ideas, your A.I. audio. Next, we're going to start putting together our mood boards and the like as we get this together. So by the end of this, you'll have the background music and narration, or even the audio for your characters if you need it, ready to start putting in place.
24
So let’s go and work out what our videos are going to look like in the next couple of sections.
— Why Style Guides Matter for AI Videos —
1
Section 7 now, moving on to our pre-production: creating our mood board. This is really about creating our style guide and our anchor image, as you might call it, or reference image, or style reference; there are all these different terms for it. A lot of people want to skip this section and go, "I don't need this." Well, I'm here to tell you that actually, yes, you really, really should, especially for AI video, because you are going to generate multiple images using AI to generate video, and you need them to look like they were all filmed in the same place at the same time. If I was actually filming a scene with multiple shots and people, it would all be shot at the same time, or at least with the same costumes, style and set design in the same location. With AI, every time I'm generating an individual image and then a video, it's very easy for these images to get lost and to lack continuity between them, so it looks like multiple different movies: characters don't match, scenes don't match, the feel doesn't match. So we do this step right here, which is really quick and easy, but it's going to make a big difference.

If I just put up the slide right here, you can see exactly what I mean. The reference image, or style reference, is intended to dictate your visual consistency across all future images in AI and creative workflows. This might also be referred to as your visual benchmark or your style anchor. It plays a foundational role in establishing the visual tone, texture, color palette and overall aesthetic that other images should follow. For example, if I generate this first image, this kind of film noir image, black and white, of a woman with the venetian blind casting a shadow over her, and then I want to create a room in the same style (a dining room or living room, Hollywood 1940s, that kind of feel), then inside the platform we're going to use, which is predominantly MidJourney, I can tell it to generate the new image I'm describing in the style of this previous image, so I have that consistency across shots.

So, as you can see from the slide, it's crucial. I could have created that first shot and then spent my whole time trying to create other shots like it. But instead, because I have those shots (we're going to develop character shots here, and the main scenes that we need) all set in stone, I can refer back to them when I'm creating all my fill images in between. For example, in my story, I know I've got a girl coloring in her coloring book when I first start; then she looks out the window, and then she goes somewhere else. I can make sure I have the same character, wearing the same clothes, looking exactly the same in each shot, in a house that looks exactly the same, for those multiple shots. We generate these images in this mood board pre-production section to have that set in stone.

Now, there is a little bit of a catch-22 for this section, and I'm just going to let you know this: obviously you haven't used these image generation tools yet. I'm going to use MidJourney for this section, and that's taught in two sections' time, when we generate images in a big section there and I'll teach you in depth how to use MidJourney. So you can either follow along with what I'm doing right here, and see how I'm prompting and what I'm doing, or skip forward to that section and see in the MidJourney lectures how I'm using that tool and all its different aspects. I don't want to cover it here and then cover it again later. So either skip forward or, probably better, watch this and see what I'm doing; if you're missing something or it's not obvious, go forward and take those lectures and come back to this, but you will get what I mean from here. Or perhaps some of you are going through the whole course and will then go back and do these steps one by one. Whichever way you're doing it, just to let you know, that's the case. Doing this will also save you a little bit of money. Not to make this little lecture any longer (you can skip the next one if you don't care), but I'm going to tell you how this can save you time and money before we get on and actually generate some images.
— Style Guides: Faster Workflow, Fewer Generations —
1
I realize that not many of you will have seen MidJourney, which is the main image platform we're going to be using, although many of you will have seen it earlier on, after the fundamentals section, when we did the workflows. You saw me use it there to make a whole workflow, which was great, and we touched on it earlier in the course.
10
But if you haven't seen it yet, know this right now: if any of you are looking to do this on a bit of a budget and want to save some money, then this stage and the next one, the mood board section and the storyboard section, can save you a lot of money and time, as opposed to jumping straight in, trying to make images and then animating them with AI.
21
That’s because if I come onto the subscription
22
that I’m on, for example, I pay $60
23
a month.
24
That’s an expensive subscription or right around the
25
middle.
26
They are a lot cheaper and I’ll show
27
you in a moment.
28
That gives me 30 hours of fast generations
29
and I can do 12 concurrent jobs and
30
all these details.
31
Now, if I go to change the plan,
32
let me show you there is a basic
33
at $10 a month, but you’re limited to
34
generations of 200.
35
So if you’re going to do loads more,
36
then obviously it will cost you more than
37
that.
38
And then you’re done by hours.
39
So 15 hours of fast generation is 30
40
hours.
41
So if you’re using it a lot every
42
day, suddenly you run out of hours.
43
It’s very slow generating your images.
44
So say you want to stick to the $10-a-month plan, or the $30 one, but probably the $10 plan. When you come to generate images for your film, and I've been there too, you could easily generate one and go, "Oh no, that doesn't look quite right," and do it again and again and again until you get it.
53
You could be easily doing 10 times the
54
number of images you need to.
55
You’re going to generate multiple images for each
56
one.
57
No one is going to give you the
58
perfect image first time, very unlikely.
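To put rough numbers on that, here is the arithmetic as a sketch. The plan limit is the basic-plan figure mentioned above; the re-roll counts and the 20-shot film are illustrative assumptions, with every MidJourney job returning a grid of four images.

```python
IMAGES_PER_JOB = 4          # each generation returns a 2x2 grid of images
BASIC_PLAN_IMAGES = 200     # rough limit of the $10/month basic plan

def images_used(shots, jobs_per_shot):
    """Total images consumed for a shot list at a given re-roll rate."""
    return shots * jobs_per_shot * IMAGES_PER_JOB

# Illustrative: a 20-shot film, re-rolling ~10x per shot without a style
# reference versus ~3x per shot with one already locked in.
without_ref = images_used(20, 10)   # 800 images
with_ref = images_used(20, 3)       # 240 images
print(without_ref, with_ref)
```

Under those assumptions a style reference cuts usage by more than a factor of three, which is the difference between blowing through a cheap plan in days and making it last.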
59
But if you do this section we’re going
60
to do right now, where we generate an
61
image to have as a style reference, you
62
are going to have to generate a lot
63
less images, saving you a lot less time
64
because these take time to generate and actual
65
money.
66
Because if you run out of generations, you'd have to go up to the next plan.
69
So if you’re looking to save a bit
70
of money and especially time, I do this,
71
it saves me so much time.
72
And if you’re doing multiple projects, if you’re
73
doing a social media channel, YouTube, and you
74
want to get out three, four projects a
75
week, this will save you so, so much
76
time to just do this step.
77
And that’s why I suggest doing it.
78
So let’s move on to the next lecture,
79
where I’m going to grab the actual shots
80
that I need.
81
I’m going to get a shot list for
82
the script that we’ve been working on.
83
And then I’ll know what I need to
84
make for the next step when we’re going
85
to get creative.
— ChatGPT: Turning Scripts into Shot Lists —
1
Now, before we go on and generate our
2
images for our mood board, we need to
3
know kind of what our images are.
4
We don’t generate everything at this stage, but
5
a really quick way to do this, and
6
I suggest you do, I’ve got my script
7
here, which you will have also from the
8
project you’ve been working on.
9
And I just copy this that we’ve saved
10
before, or you can find it in chat
11
GPT where you’ve worked on this previously.
12
And I can ask it. I type "for this script", then I paste the script in, and at the bottom I add: "Generate me a simple list of each character and scene that needs to be generated. I am using AI video to generate these, so provide me a prompt for each character and set/scene at the end, to use to get these images."
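If you reuse this step across projects, the request itself is easy to template. The sketch below just rebuilds the same ask as a reusable string; the wording is lifted from the prompt above, and it's only a template, not a special API.

```python
def shot_list_prompt(script: str) -> str:
    """Wrap a script in the shot-list request used in this lecture."""
    return (
        "For this script:\n\n"
        f"{script}\n\n"
        "Generate me a simple list of each character and scene that needs "
        "to be generated. I am using AI video to generate these, so provide "
        "me a prompt for each character and set/scene at the end to use to "
        "get these images."
    )

# Paste the returned string into ChatGPT (or any chat model) as-is.
print(shot_list_prompt("Two girls on opposite sides of the world..."))
```

Swapping in a new script is then a one-line change per project.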
21
Okay, this is actually going a bit of
22
a step further than what I normally do.
23
But I can show you this how simple
24
this can be.
25
So I’ve got my characters here, Amy, an
26
American girl.
27
And here is the prompt I can use,
28
Amy, a Japanese girl, Amy’s father, lower half
29
only, and Amy’s father, lower half only.
30
And for each of these, it’s already given
31
me a prompt.
32
Now, you don’t have to use this, but
33
I like to have it as a template
34
right here, I might decide that Amy’s got
35
ginger hair, I might decide that she’s wearing
36
something different, that she is sat somewhere else.
37
And you probably want a more general image.
38
The first thing I’m going to be doing
39
two things we’re going to generate one is
40
like a profile picture, I imagine it as
41
a full body and top half image shot
42
face on, possibly side on of each character
43
so that I have them for reference.
44
And then for each scene, we have each
45
one here with a prompt.
46
So opening scene, we’ve got an American interior
47
home.
48
And then we also have the Japanese home,
49
Amy’s expression, it’s a close up of a
50
young girl’s face.
51
We don’t need this yet.
52
Because right now for the mood board, we
53
are just generating that.
54
And then Amy’s room growing darker, you may
55
want to have a juxtaposition, maybe two shots,
56
the warm light one, and the dark lit
57
one and how that changes.
58
And then "two young girls standing side by side, holding hands". You don't need this yet, because we are just generating what these scenes look like.
62
So that’s what we do.
63
So I just need my two characters and
64
a prompt, which you can change later.
65
And then your scenes, this will be very
66
different for you, obviously, depending on what your
67
scene is.
68
For me, I have each one of my
69
four characters.
70
So in the next stage, when we generate
71
this, I’m going to have four characters and
72
their profiles what they look like.
73
And I’m going to have one or two
74
scene shots.
75
And they’re going to have the right style,
76
right color and right imagery.
77
So I’m getting what all of my movie
78
looks like, and they’re all going to match.
79
So go ahead and get yourself somewhat of a shot list, with some prompts, using AI.
81
And then we’ll go into the next stage
82
where we’re actually going to get creative and
83
start generating these images.
— AI-Driven Style Prompts for Stunning Visuals —
1
I’m about to generate images that I want for my movie that I’m going to be created and
2
I have a style in mind. But I want to make you aware once again for this page, AI video
3
dot school forward slash styles where I list out all different styles and what they look
4
like from blue hue to different directors and their styles. I don’t want you to waste
5
credits if you’re on a limited plan trying to do. I want that 1980s kind of retro style
6
1950 is this colorful and trying to work out what it is when we can use AI and other sites
7
to generate what these look like. First, let me go into Gemini right here. Gemini is a
8
good one for this because it’s multi modal so we can give you information and images.
9
Use this prompt: 20 different movie styles (you could have more) with visual image examples and what is typical for each. The visual examples should be scenes showing the style.
11
Otherwise, sometimes it just pumps out the movie poster. For example, I’ve said film
12
noir, cyberpunk and so on. So I hit enter and we can see what those styles we’re going
13
to have. So I have film noir and it’s giving me an image. So if I’m like, oh, that’s the
14
kind of style that I want. Cyberpunk looks like this. Italian neorealism. Very real following
15
people around. French new wave, that kind of romantic style. German expressionism. Surrealism.
16
That’s when everything’s a bit distorted like a Dali painting. Of course, animation, sci-fi,
17
Western horror, comedy, some of the obvious ones. But you can go into depth here, breaking
18
down exactly what these styles are. But on site, once again, we have quite a lot film
19
noir, Western, sci-fi, retro, some 1980s here, melodrama, golden hour, blue hues, cyberpunk,
20
steampunk, documentary style, expressionist, surrealist in the style of Tarantino, Wes
21
Anderson, Stanley Kubrick, deep focus, shallow focus, fisheye shots. So that’s on there.
22
You can go and check those out, but start making your own. If you have a style in mind
23
and perhaps you’ve seen it in a movie or a TV show, ask the Internet, what is this style
24
for this movie, this TV show? It might have a style you don’t understand. Otherwise, it’s
25
going to be very, very hard for you to generate that in the next section if you don't even know what
26
it’s called. So let’s go and generate these images now. I’m getting excited to get creative.
27
See you on the next lecture.
— Creating Visuals with MidJourney for Video Planning —
1
Now let’s generate some images and get creative
2
here.
3
I’m inside MidJourney.
4
MidJourney.com.
5
Once again in the next section, section after
6
next we’ll be going to images.
7
I’ll show you how to use this software
8
in depth.
9
You can skip forward and do that if
10
you want to, or just follow along with
11
me here.
12
And if you need to, then go and
13
get some more experience by checking that out
14
or just follow along and come back to
15
this.
16
So I’m inside MidJourney.com.
17
This is the, there is a version inside
18
Discord, which I first used.
19
And now there is this great platform version
20
online, on site MidJourney.com to be able
21
to use and generate.
22
So I’m going to generate as if I’m
23
you.
24
I’m not going to do my course project.
25
That will come in a couple of lectures
26
time.
27
So you’ll see me also generate images there.
28
I’m going to do a simple example to
29
show you what’s needed.
30
So I’ve gone into ChatGPT and I’ve said,
31
generate me a simple three minute story, sci
32
-fi about a girl and her father.
33
I want a deep and moving story.
34
So it’s generated me a story.
35
It doesn’t really matter what it is for
36
this example.
37
I’m going to show you how to use
38
images.
39
It’s about Lina and there’s, she’s in a
40
sci-fi futuristic world.
41
Her father leaves her a message and like
42
a hologram.
43
And then I’ve asked it to, sure, can
44
you get me a shot list now of
45
these characters and sets?
46
I’m going to use AI to generate this
47
movie.
48
So provide me with prompts, just like I
49
showed you a couple of lectures ago.
50
So I’m going to just take this prompt
51
right here.
52
First, I need Lina.
53
So we’re going to be collecting our characters
54
and what they look like, and then our
55
scenes.
56
And I want them to have a consistency
57
throughout them.
58
So I’m going to just copy that right
59
here.
60
I’m going to go back into MidJourney, into
61
Create.
62
I’m going to go right here.
63
I’m going to make sure my settings, I
64
want these in 16.9. That’s absolutely fine
65
for a movie setting.
66
I’ll show you more about those details in
67
the future section.
68
I’m going to paste this in and just
69
reread it.
70
A 17 year old girl with a futuristic
71
yet rugged look.
72
I’m going to ask for a cinematic still.
73
You don’t have to say that, but you
74
can do.
75
A 17 year old girl with a futuristic
76
yet rugged look.
77
Shoulder length hair, wearing worn out clothing in
78
a sci-fi urban setting.
79
She has a contemplative expression.
80
Cinematic lighting.
81
Don’t want to put digital art.
82
I’m going to put photorealistic and 8K, although
83
it won’t do 8K.
84
I’m going to say full body shot on
85
a white background.
86
She is looking directly at us.
87
And let’s generate that.
88
Now what I’m actually going to do is
89
generate.
90
I’m going to generate three versions of this.
91
I’m going to go R3.
92
Again, I’ll teach you these in a future
93
section.
94
And that’s going to basically each time you
95
generate, you get four images.
96
So I’m going to get 12 because I
97
did it three times because you never get
98
exactly what you want.
99
Very rarely.
100
And I want to get a big enough
101
display so I can see what I’m working
102
with, what I need to change, perhaps in
103
the prompt to get what it is that
104
I want.
105
So it’s not giving me this full bodied
106
shot for these at least.
107
Let’s see what else is generated.
108
OK, I’m getting a whitish background.
109
This is somewhat of what I need, but
110
it’s not quite photorealistic.
111
This is looking a little bit more like it.
113
Let me see this.
114
OK, I quite like this one, actually.
115
Yeah, let’s work with that.
116
I quite like that.
117
So I want to remix this with the strong setting.
118
I’m going to have it do that a
119
couple of times just quickly.
120
And it’s going to use this similar type
121
setting.
122
Obviously, it’s not full body yet.
123
We get to that in a moment.
124
What I want is a nice profile shot
125
of Lena so that we have a great
126
image to work with for when I’m generating
127
future shots and make sure the style is
128
the same.
129
All right.
130
Let’s have a little flick through these.
131
I do like this.
132
I like this blue hue that we’re using
133
here for the movie.
134
And that’s what I kind of want to
135
have throughout here.
136
This blue hue.
137
If I go back onto my styles, you
138
can see if I scroll down to the
139
bottom here.
140
So, further up: blue hue, this kind of look that's often used in sci-fi.
142
So that’s really good.
143
Yeah, I actually do like that.
144
Now I’m going to just do a subtle
145
mix of this again.
146
I’ll show you how to use this tool.
147
So it changes a few things.
148
Maybe do her clothing, maybe her hair.
149
I might not want to change anything at
150
all.
151
Let’s have a look and see what it
152
generates.
153
Now, let’s flick through these.
154
She looks slightly more like a computer game character in those ones.
156
I like the original image a little bit
157
more.
158
Let’s go back to the original image.
159
She’s got that look on her face and
160
it’s a look.
161
OK, I’m going to upscale this so I
162
have an even better image of it.
163
It’s going to generate one here.
164
I’m going to download that.
165
And then we still need, obviously, a full
166
body shot of her.
167
And I want to try and get this
168
on a more simple background.
169
Also, I’ll show you that.
170
OK, 100 percent complete.
171
This is the image of Lena that I
172
like.
173
Yeah, nice.
174
So focus right here.
175
Clothing is really cool.
176
This like backpack that I’ve got here.
177
I really like that.
178
So I’m going to download that right there.
179
And then the other thing I want to
180
do with editor, I’m just going to make
181
that smaller, move that here.
182
So we get a full body shot.
183
Let’s submit that and have a look.
184
Let’s have a look and see what’s generated
185
for us here.
186
She's holding a gun in this one. An oversized baggy jacket: that's quite cool, as if it's her father's jacket, something I didn't think about that it's creating for me here. Another one with a baggy jacket.
193
And then she’s got kind of a glove.
194
I like this one.
195
Let’s remix that subtle and remix that strong.
196
And you’re seeing how we’re generating imagery here
197
and how we’re how I’m using just like
198
nuances as I get something close to what
199
I want.
200
Then I edit it slightly again in a
201
couple of sections time.
202
I’m going to show you how to actually
203
edit these images specifically, add something in the
204
background, take something away, change things specifically.
205
I’ll show you that more in the following
206
sections.
207
But this is exciting, isn’t it? So here’s the remix subtle — let’s take a little look at that. She looks more Asian in that first photo, and it’s changing her face too much, so not that one. And let’s have a look at the strong version — these are the images it generated with a strong remix. It’s changed too much of her face and the way she looks. I’d like to go back to the originals we made, where she’s wearing what looks like her father’s coat, a big jacket. Yeah, I really like it, although that one is really cool with her sleeves up a bit. She’s holding what look like two guns — I can take that away. I quite like her in her father’s jacket.
229
OK, let me upscale that one so I’ve got it stored. Here is the high-res version of it, in this alleyway. OK, I want to make sure I’ve got those downloads here.
235
What I’m going to do is this: if I go here, I can choose a file or drop one in, so I can actually drop this right there. This is the close-up shot we just had of Lena. I can do the same with this one right here, and it’s uploading, so I have both of these shots. Now I can select this shot and choose her as the character — again, I’ll show you all of this. I’m going to say: this character on a white background.
249
I don’t think it’s going to do it — I think it’s going to take multiple generations to get exactly what we want here. We don’t necessarily need it 100 percent, but I’d quite like a really clear image, so at least a brighter, whiter background would suit me slightly more. OK, it’s done. Absolutely not what we want. Why is this one a chicken? Sometimes it will do nothing like what you want it to do — absolutely nothing. This is why you generate multiple, multiple images.
265
This one is perhaps the closest, but I’m going to regenerate it. I’ll take the existing prompt and use it, and I’m going to remove some of it — no, I’m not. I’m going to add “on a white, clear background”, so it’s in twice, and I’m going to make sure the image is selected, telling it to keep this character. Let’s generate that.
277
OK, cool. So I did three versions of that: two with the wider shot we’ve got and one with the closer shot, to see what I’m starting to generate here. And I am getting a clearer background — that’s actually quite a good shot. I think my favorite, and the best quality we’ve got, is one of these; it’s not completely clear, but that’s pretty good. That one as well — see, we’re getting this right here. So I’m going to store one or several of these. I’ve already got the two other images, so I’ll have a little bank of a few images of Lena, some clearer than others, that we can use every single time. And it’s got that blue-hue feel.
299
Now the last thing I want to do is generate one of these scenes. OK, so: a dimly lit, cramped, futuristic apartment filled with holographic screens, old tech and a few personal items. The room is worn, lived in, with a view of a dystopian cityscape outside. Here’s the prompt I’m going to use — copy that, go through there, and paste it in. Now, to make sure I have the same feel, I’m just going to copy this image over and click the paperclip, so I’m telling it that’s the kind of feel I want. So: a dimly lit, futuristic apartment filled with holographic screens, a cramped, worn feel, with a dystopian city outside. OK, let’s take a little look and see what that does.
323
Actually, I should have done multiple of those, so let’s do that again — two more, so I get 12 images in total. Let’s see what we get from Midjourney.
329
I can see these generating already. Because we matched it to that original shot — this one, with the blue-lit hue — we’re going to have the same look and feel throughout our entire movie, since we generated her image first and we’re using it. I can scroll through here and see these generating. Yeah, some of these are a bit messy, and some don’t look that futuristic, but I can keep working on them.
342
Let’s have a little look and see what we’ve got here. That one’s got a person in it that I could remove quite easily. This is quite a cool apartment shot. One of these, one of these — maybe that one’s a bit scary, and it’s messy. I quite like one of these. Actually, that’s quite nice.
352
Looking at this apartment one: OK — futuristic apartment filled with holographic screens and old technologies, cramped space with a worn, lived-in feel. What if, right here, I use this prompt — the style is still on here — and say “living room”?
360
Let’s generate that and take a look at some of these. They’re looking quite futuristic, quite cinematic, with the look and feel we want. Oh, here we go — we’ve got the screens here, and she’s actually in the shot right there. But we can start working on some of these, and I can even start taking some of this away.
370
So if I come up here, I’m going to say “a dimly lit futuristic living room filled with” this, and I’m going to take away “a cramped space with a lived-in feel”. I’ll take that out right there, and you’ll see how the smallest amount of text can change the whole image. So let’s just generate that — submitted. Let’s wait.
380
And this is the fun part — I love getting creative like this. Yes, it feels like we’re generating quite a bit, but once I’ve generated this, I’ve got those shots of Lena and her apartment, and I’ve only got several scenes. Because when I use this image of Lena with the paperclip to get the same feel, it’s going to save us so much time on our future images — because we made this mood board.
392
Let’s look at some of these. OK, this one — I quite like this. It’s got a futuristic kind of feel to it. Remove that.
398
I’m going to try one more prompt: “a dimly lit futuristic apartment, sci-fi, living room, holographic screens and technology”, I’m going to say. Let’s take that away and launch this.
404
Let’s take a look at some of these. OK, yeah, this is quite cool, isn’t it? With these screens over here — and it still looks a little messy. We’ve definitely got that cyberpunk feeling, which it keeps saying in here; it’s inside my prompt. Multiple screens, and it’s got that dystopian city outside, that kind of Asian-metropolis feel. Yeah, I really like it. You could keep playing with this over and over until you get it perfect. In fact, we could remove “cyberpunk” if we want to, because we already have that image telling it the kind of look we’re going for. If I just remove that, we can see what we get.
424
But on to the next stage. So now I’ve shown you how to generate a character, then use that feel to generate a scene. Perhaps you’ve got multiple scenes and multiple characters — you’d create them in exactly the same way, so I won’t create more characters with you; you’d just repeat the process. Generate the characters and scenes that you have, and then we’re going to put these together inside the document we’re making, to create our mood board. I’ll quickly put that together in the next lecture, before I go off and make the follow-along course project, where we make the mood board for the movie I’m creating set in the 1940s.
442
Finally, let me look at this. All right, nice. Now that I’ve taken away “cyberpunk”, I’m getting a bit more of the feel that I want. Ignore the characters in this one — it’s got a few screens there. This is quite a cool one, this one right here. You can imagine her sitting down, taking a call, and it releasing a hologram of her father here. So I might keep that image right there. I could obviously change it slightly — remix it subtle, remix it strong. In the next lecture, I’m going to put these together and show you what I do when I create a mood board. I’ll see you in the next lecture.
— Step-by-Step: Creating an AI Video Style Guide —
So you’ll probably want to set your mood board down somewhere — some of you will want constant reference to it — and how you do this is completely up to you. I’m going to show you some things you probably haven’t thought about, things that I do.
8
So, you’ve been working on your script: we have our script, we have the shots we broke down and all the other material, and you could simply have your mood board at the end of that document. If I were generating it there, I could just add a mood-board heading and my images — it makes absolute sense to have it underneath and start placing your images.
18
I sometimes like to have a visual version too — sometimes I keep it on my desktop. I use Photoshop, but you could be using Pixlr or Canva — any tool, really; it doesn’t matter what you’re using.
25
Now, I do something here: I have font, main prompt notes, character 1 — you’d also have characters 2, 3, 4, 5, however many you have — and scene 1, plus scenes 2, 3, 4, 5, and I just add the images into here.
31
So if I go to my downloads, I’ll grab the images we just generated. I’d have character 1, Lena — I’d place her here and start making my mood board. Great. I also have a different shot of her, a wider one, to use for reference, and I obviously keep these in my downloads too, to make sure I have them.
41
So here are my characters, and then the scene we were developing, scene 1 — you could obviously have multiple scenes depending on the project. I’d put my scene right about here. And if I go back to Midjourney, I can see the main things I was using to prompt with.
51
So if I go back to the shots we were doing — “cinematic still, futuristic yet rugged look” — what I’d do is paste this in and start taking things away, so it reminds me of some of the terms. Let’s go here — don’t like that — let’s paste this: “futuristic yet rugged look”. I don’t need any of that “worn-out clothing”. I like “in a sci-fi urban setting”. “Cinematic lighting” stays, I asked it for “photorealistic”, and the rest of it isn’t needed.
65
And then the same thing for my scene. When we were developing that, if I use this text — let’s get rid of this and use this prompt — “futuristic apartment, sci-fi, holographic screens, with a view of a dystopian city”, which I quite like. I’m also going to add “blue hue” and everything else we were using — blue hue, which, if you remember, is right here on the site that we’ve been working on and shared with you at AI Video School.
78
Let me just move that down: blue hue, and then also cyberpunk. Just some terms to remind myself of things I might want to ask for when prompting, so that I always have them on my sheet.
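If you prefer to keep those prompt notes somewhere scriptable rather than in a document, the same idea — one fixed list of style terms appended to every prompt so the look stays consistent — can be sketched in a few lines. This is only an illustration; the term list and the `build_prompt` function are my own examples, not part of Midjourney or any other tool:

```python
# Illustrative sketch: keep the recurring style terms from your
# mood board in one place, and append them to every prompt so
# each generation shares the same look and feel.
STYLE_TERMS = ["blue hue", "cyberpunk", "cinematic lighting", "photorealistic"]

def build_prompt(subject: str, terms=STYLE_TERMS) -> str:
    """Join a subject description with the shared style terms."""
    return ", ".join([subject] + list(terms))

print(build_prompt("a dimly lit futuristic apartment with holographic screens"))
# -> a dimly lit futuristic apartment with holographic screens, blue hue, cyberpunk, cinematic lighting, photorealistic
```

Changing the look of a whole project then means editing one list instead of every prompt.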
85
Now, fonts: you’re going to want some title fonts, and I expect it’s also a great way to get a feel for things. This is a sci-fi movie, and I want a sci-fi, futuristic font.
91
I could be looking through Photoshop or whatever software we have right here, but if I just go to Google — my favorite is DaFont. I use DaFont, and you can search in it. If I search for “sci-fi”, I start getting some sci-fi styles. These are perhaps a little too much. Looking through the techno and sci-fi fonts, there’s an actual Star Wars-style font right here, and a Squid Game one. This is quite nice; so is this. Let me keep going. Oh, I quite like this too — it’s called June Rise. OK, let me download that.
111
If I open that up from my downloads, all I have to do to get it into the software I want to use is drag it onto the screen — June Rise — and open it up. It’ll say: this is the font, do you want to install it? Sure, I’ll install it. Close that, go back into Photoshop where we were, and I can now search for “June” — and there it is, June Rise Regular.
122
So — what was the story title they gave this? Here’s the script right here. Oh yeah, “The Last Message”. So I could put this on here and change the text to “The Last Message”. And here’s the font I like to use.
129
You can, of course, have more than one font. This one’s for titles, but you might also want a more readable, general font — something like Helvetica Light — as a second font; it’s just a more generic typeface to use alongside the title. So I could have the title in one and perhaps the credits or whatever else in the other. I like to have that because it really brings in the feel of things.
143
And we could go a step further — sometimes I do this. If I move these over — the characters here, keep the character there, put the scene right over here, scene one — I like to add colors. Let’s make that here. If I were choosing specific colors, I like to pick them from the shot, like this color right here.
155
Let me grab a shape — I don’t need a stroke on it; that feels OK. Let me choose another color — this darker bit right here is pretty good. And we’ve also got some of the blacks she’s wearing, and this really dark blue right here. So here are pretty much my colors, maybe in hierarchy.
165
I like to divide up what my movie is going to be, and I can see these colors, with this blue hue, are present in here — definitely present, especially these bottom two. And I can make sure that every single time I’m generating, the blue hue is in the prompt.
174
I’ve got my fonts and the feeling. I’ve got my main character — you’d have multiple of those — and my main scene, again with multiple of those. And I’ve got my colors, just to make sure everything matches up.
181
Once again, you don’t have to use Photoshop for this. You could just put it right into your document, or use any of the free online tools, like Pixlr or whatever else you’re used to, to put this together.
188
But it’s great to visualize what my mood board looks like. I can neaten this up, then store it and present it however I want. Just fonts, characters, colors, scenes, and prompt notes — and you’re pretty much good to go.
196
So, on to the next lecture. I made this one for the made-up story we had for the sake of the demonstration, but we’ve also been following along with a course project that I’ve been working on all the way from idea to script. Now we’re going to take that to a mood board, and all the way through to making images and video for it; then we’re going to edit it, send it off to a film festival, and see how it does. So let’s follow along with that, and I’ll make the mood board and characters for my project.
— Course Project: Crafting the Mood Board —
So now it’s time, if you’ve been following along, to do this for our course project. You know I’ve gone all the way from the beginning — generating an idea, generating a script, getting bullet points down for what the characters should be like — and now let’s do the style guide for this project. Then you’re going to see me create images, storyboard them, animate them into a video, edit it together, and send it off to a film festival as a final video, so you can follow along all the way from beginning to end.
14
So I’m going to make the style guide, or mood board, for the story we’ve been working on. If you’ve been following along, that was this: the story of two young girls, one in the USA at Pearl Harbor, one in Japan, three or four years apart, both losing their fathers — that emotional impact, and the impact of war on those we don’t always think about.
26
So I asked: OK, here’s the script — can you break this down? The prompt was: generate me a simple list of the characters and scenes that need to be generated; I’m using AI video to create these, so provide me with a prompt for each character and scene. And it broke it down into Amy, Amy, Amy’s father, Amy’s father, and then some scenes.
38
If I go back in, this is where I’ve been copying and pasting the script we’ve been working on, and you’ve probably got a very similar project if you’ve been following along step by step. In the same document, rather than use Photoshop like we did before — you saw in the previous lecture I made something like this for the mood board — let’s do it much more simply and just keep it inside here. I’ll show you how, so you have both options.
52
I’m going to go from the top: fonts, colors, then the characters we’ll generate. I also need to add scenes here, too — let me just add scenes, and we’ll get to that.
58
From the top: this is set in the 1940s, and I really want to make it feel like it. That means matching fonts for titles and colors. So I was doing a little research, looking at 1940s films and the fonts they were using — this kind of scripted curve that’s used a lot, as well as this other style right here; I don’t even know what you’d call it. If I go through 1940s advertising, you can see it again, and in 1940s posters there’s this kind of style, just to the right of my cursor, which is the same sort of thing.
79
Now, on to fonts — you saw me download these for free. I just searched “1940s fonts” on Google to see what came up on different font sites. This is 1001 Fonts, and if I scroll down, I can see what’s here. Some of these are good; some not so much.
89
This one, I think, is very similar to that style you see in The Great Dictator or some of the advertising-type brands. This one right here — where did I go? Sorry — at the top, Boogaloo Regular. I’m going to download that and just make a note to myself for when I come back here: Boogaloo Regular. And let’s keep going.
102
That wasn’t the only font I wanted. Now, this is that scripted curve — look at this one, Advertising Script Bold. This is definitely the style we see in the first one, and in It’s a Wonderful Life. Let’s definitely take Advertising Script Bold — make a note — and keep going.
112
I want maybe three or four fonts. This is definitely it — actually, these two are. There’s Rick’s, which is probably taken from the film Casablanca, whose main character is Rick, set in wartime in the 1940s, and Yesteryear Regular. So let’s take Rick’s and Yesteryear. I’m going to have about five or so fonts, I reckon.
124
Then I was thinking: since it’s Amy and Amy — Western Amy in the U.S. and Amy in Japan — I wanted a more Japanese-inspired font for the latter. Now, some of these obviously go way too far, like this one; I don’t need it to look exactly like that, and if I used it, it would feel almost culturally inappropriate. But let me keep going. There was something like this — not too dissimilar to Yesteryear, but with that kind of Asian feel to it. So let’s take that one.
142
Then, in exactly the same way, I’m going to open these up one at a time and install them — very simple: double-click it (I’m on a Mac; it might be slightly different on a PC) and hit install. And now it’s done. Let’s do that for every font.
151
So I’ve downloaded all of those. All I have to do is double-click one of these, go to the dropdown, and search — that’s the reason I put the names in here. There’s Boogaloo; then I can do Advertising. And there we have it: all my fonts are in here.
162
Fonts really dictate a style and a feel, and generate an emotion just like music does — I really feel that fonts do. So I’ve got five fonts in here. There’s no way I’m going to use them all; I’ll obviously narrow it down to perhaps two. But it’s good to have them there — it sets me up with options and a theme.
173
Now, brand colors. If I go back here, let me show you this advertising and what we’ve got, and see if you can spot some similarities between these. Actually, going back to images might be better for this. I’m definitely feeling reds, yellows, oranges — and maybe a punch of these blues: red, yellow, orange, and a tiny punch of that blue again. OK, that’s what I need to know.
186
So I’m just going to go Insert > Table. Let’s do this. That red was definitely a bright red — almost a slightly dark red. OK, let’s change that to this kind of red. Then there was the yellow — quite a bright yellow; let’s add that somewhere around here. And there are definitely these orange hues, that kind of muted color — let’s add something here. And then, at an absolute push, there was this pale kind of blue I saw pop up every now and again — it’s in here; let’s add something more like this kind of color. OK, and that’s there.
206
So here I’ve got the color palette that I like. Obviously there’s also going to be black involved — you can see it in all the shots, for text and things. But as far as my images go, this is roughly my color palette, my 1940s color palette that I’m going to work with.
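If any part of your pipeline is scripted, the style guide being built here — fonts, palette, prompt notes — can also live as a small data structure. This is a hedged sketch only: the hex values are my rough approximations of the red/yellow/orange/pale-blue palette described above, not colors sampled from the actual posters, and the structure itself is just one way to organize it:

```python
# Illustrative 1940s style guide as data. The hex values are rough
# approximations of the palette described in the lecture, not exact picks.
STYLE_GUIDE = {
    "fonts": ["Boogaloo Regular", "Advertising Script Bold", "Rick's", "Yesteryear Regular"],
    "colors": {
        "red": "#B22222",
        "yellow": "#F5C518",
        "orange": "#D2855A",
        "pale_blue": "#A8C4D4",
    },
    "prompt_notes": ["Technicolor", "1940s film stock", "full color"],
}

# A quick sanity check that every palette entry is a well-formed hex code.
for name, value in STYLE_GUIDE["colors"].items():
    print(f"{name}: {value}")
```

Keeping the guide in one structure means the same palette and prompt terms can feed both your documents and any scripted prompt-building.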
215
OK, we’re building this up — style guide, fonts, colors. Let’s get into our characters.
218
I just copied and pasted this from the ChatGPT prompt we had, asking it to break the script down. It made us a prompt — that doesn’t mean it’s our final prompt, but let’s go in and double-check it: a young American girl, about eight years old, with short brown hair, wearing a simple 1940s cotton dress. She sits at a wooden table with crayons. The setting is a 1940s American home with warm morning light.
230
I’m interested to see what that prompt generates by itself. So let’s go over to Midjourney and paste it in right here — again, in a couple of lectures’ time I’ll show you how to use this fully. Let’s just see what it generates, what our character looks like, and whether it has the right feel. I haven’t told it anything like “photorealistic”; it doesn’t know what style. I haven’t told it any colors or anything. I’m just interested to see what it generates first.
244
OK, here we go — let’s look at these. It is realistic, for sure. Yep, it definitely looks like it could be 1940s; it even did the haircut. It seems right here — maybe not this one so much, but the dress and everything right here is definitely 1940s.
252
OK, let’s give it a little more information: a young American girl, about eight years old, with short brown hair, wearing a simple 1940s dress. She sits at a wooden table with crayons. The setting is a 1940s American home — 1940s decoration to the house; I just want to see if I can get a feel for this — with a 1940s aesthetic to the image. Very generalized; I haven’t told it much else. Let’s hit this. I also want to take away “warm morning light” for a second and see what it comes out with — see if that’s skewing the color palette I want slightly.
272
OK, now we’re getting somewhere — I really like this feel right here. It definitely has a 1940s feel, and so does this one, somewhat, for sure. OK, let’s play with this a little more.
278
For all of them this time, I’ll add that outside the window we see Pearl Harbor — which would be an extremely rare thing, for them to live right on the harbor, but it’s just to get a feel; I don’t think it’s going to make that exactly. And then I’ll also say we see Navy military decoration in the shot. I just want to start building this up a little and see what we get.
290
OK, let’s see what that generated. Here we go. It didn’t really do too much if I compare it to, say, this shot, which definitely looks 1940s — so does this. But do I get a different feeling with Amy herself? I’m going to run a few more generations here, and then I’ll come back and show you my results.
302
So I’m back, and I’ve been generating quite a few images, playing with different things, so let me show you some examples. One issue was that I was starting to get 1940s black and white — I had to make sure I eventually said “full color” in the prompt. I’ve really loved this kind of feel; I don’t know if it was exactly this, and that’s because this one is outside. If I have an external shot, then definitely this feel. But I started using the style to get this color coming in when they’re inside — more brilliant colors, if you like.
319
I started using this prompt: “melodrama feel, Douglas Sirk, Technicolor”, and it started bringing back some of those feels. Now, Douglas Sirk is a director — if I go to styles here and into melodrama, this is what Douglas Sirk did in the 1950s, but there’s definitely some correlation with the 1940s.
329
So I was getting this feel. Look how beautiful these colors are — actually, that’s one I want to upscale. I had a couple of shots, and I started getting that symmetry with the final prompt wording I was using. Again, look at this grade — I love it, definitely 1940s. This is one of the final images I want to go with, and here’s some of the prompting I used.
341
“Girl in the middle, symmetrical shot” — and I used “Technicolor”. Also, if I go back here, “Technicolor 1940s film stock” was another one, so let me use that. I’m going to copy this to remind myself what my other prompt was and put it down here.
349
OK, let me go back in. Here are some of the images I want to work with. I love this coloring, and it definitely feels dated. Since we’re using AI, we can pretty much set our story in any time we want — I can show off a bit of a flex here: hey, I can make this whole thing feel and look 1940s if I want to. That’s fine.
360
Eventually, when it comes to making our shots, I’ll do things like saying “add a naval hat here”, like this, and we’ll do more of that as we start generating. But right now, I just want to get an image of both of these girls for my shot.
370
So let’s download that, and I want to download this too. When we come to do our storyboard next, I’ll finalize one of these and start generating from it. But right now I want American Amy.
376
OK, here we go — we’ve got our first character. Here’s Amy, and the feel of everything is definitely matching: look at these little bits of blue punching in here, the color of her dress, and then these beiges. It definitely, definitely works. So that was the first Amy.
385
Let me keep going. I’m going to do exactly the same thing, and of course I need exactly the same feel with this next character. First, I’m just going to paste the prompt in and see what results come up for Amy in Japan.
393
OK, these look nice — actually really nice images — but obviously this looks way too modern. So we need to adjust this prompt to match our other prompts. If I just use this — actually, it was this one right here — I want to say “Technicolor” and “1940s”. So let’s use this prompt again.
401
This time I’m going to say: Technicolor, 1940s. A young Japanese girl, eight years old, dark straight hair, wearing a yukata or kimono in soft, solid colors — let’s take this part away. She sits at a low wooden table with colored pencils, in a traditional Japanese room, morning sunlight through shoji screens. I’m going to take the morning sunlight away — actually, let’s run one with it and one without.
413
Brilliant. Now I’m definitely getting more of it — not this one so much, but this one. Look at this: definitely a 1940s feel, with really similar grain to what we had before. Not that one, not that. Maybe this one, but it’s too much. This one is slightly over-stylized, but I really, really like this one. Let’s do a subtle and a strong remix.
425
Let me also, while I’m here, use this
426
prompt and then I want to go back
427
to our image right here that we had.
428
And I want to just use the style
429
on this.
430
Again, I’ll show you all of how to
431
use this in the coming up sections when
432
we talk about imagery.
433
Really beautiful color that we have here, especially
434
that one.
435
Look how nice these were.
436
I’ve used the style.
437
It definitely made it way more modern.
438
That one is kind of 1940s.
439
I do like that style.
440
I’m going to download two.
441
I’m going to upscale this.
442
And then also I think the one I’m
443
going to go with original one was one
444
of these.
445
I think it’s this one that I like
446
the most.
447
Let’s upscale that one and keep these two
448
images.
449
And I will decide when I come to
450
my storyboard section exactly the final details as
451
we start doing most of these in the
452
next section.
453
Now I've got Japanese Amy complete in exactly the same way. Let's put those in; I'm really starting to build these up now. We've got character one, Amy, and character two, Amy. Looking at how these match each other, it seems like this one is going to be too punchy. I could play with it if I wanted to, because the background is very nice, and get it like this. But I think this shot really says classic 1940s, which is great.

OK, let me do the last two characters right here in exactly the same way. Let's copy these in here. If I can just read this: the lower half of a man wearing a 1940s US Naval uniform, one hand resting on a young girl's shoulder. The scene is warmly lit and his figure partially obscured by a sense of distance. I don't want to put this on here just yet. Let's do this. Let's also do Technicolor 1940s; we know that seems to be working somewhat. I'm also just going to copy this to remind myself of the prompt that was used. Very important to start storing these up.
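If you want to go beyond pasting prompts into a notes document, a tiny script can keep a reusable prompt log for the whole project. This is just a sketch: the file name and column layout are my own assumptions, not something any particular tool requires.

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("prompt_log.csv")  # hypothetical log file for this project

def store_prompt(subject: str, prompt: str, notes: str = "") -> None:
    """Append one generation prompt to a CSV so it can be reused later."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "subject", "prompt", "notes"])
        writer.writerow([date.today().isoformat(), subject, prompt, notes])

store_prompt(
    "American Amy",
    "Technicolor, 1940s. A young American girl, eight years old, "
    "sitting at a wooden table with crayons.",
    "matches mood-board palette",
)
```

Each character or scene then has its exact wording on record, which makes it much easier to regenerate a matching shot weeks later.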
OK, so I've got my prompt: Technicolor 1940s. The lower half of a man wearing a 1940s US Naval uniform. The scene is warmly lit and his figure partially obscured by a sense of distance. Let's see what that comes up with. While we're here, I might actually try: lower half, Navy uniform, the scene is warmly lit.

This is where I can already see that the AI is obviously getting a little bit confused. It's like: why do you want the lower half? What is this? It starts giving me some other shots of Navy officers that it's learned. So this is going to take a little bit of playing with. I've added back in the resting on the girl's shoulder, etc. So let's see if I can have a play with this and finally get a prompt that the AI recognizes. Yeah, something more like this. Similar, let's see.

OK, finally. What we're getting here, you see, I've got lower halves and what looks like his daughter, like he's saying goodbye to her. Exactly what we want. Definitely a Naval-style uniform. Similar things here. So I used: Technicolor 1940s. The lower half of a man wearing a 1940s US Naval uniform, standing with one hand resting on a young girl's shoulder. He's in the living room of a 1940s American house.

So let's play with these two, this one and this one. I really like them. Let's remix this one, subtle, because I'm not sure about the shoes he's wearing. And let's do the same thing here: subtle. Let's see what the results are, and then let's upscale one of these to store it.
OK, great. I can already see that the shoes are changing in all of these to be smarter shoes. I like that a lot. And also with this one, here we have Amy. So the only thing we would do is obviously match the dress and things if we were to use a shot similar to this. So let's upscale two of these. I think this one right here is the best; let's upscale that again. I like these colors they've punched in here to match what we had before. And then these four. Let's see something like this, and I think upscale this. So now we have her father and that's all sorted. So let's add those to the sheet.
And also, while I'm waiting for those: the lower half of a man, traditional. Let's start trying to do Amy's dad also. So I'm going to say: Technicolor 1940s. The lower half of a man in traditional attire, gently resting a hand on a young girl's head with a ribbon. We see only his bottom half, in a traditional Japanese home. We'll call it a living room still, so it understands. Let's play with that.

OK, let me download these while we're waiting and insert them. OK, very good. Lovely. Now let's do Amy's dad. Let's go back and see what it's been generating. This was good, except we know that in the 1940s he was going off to work in a factory. If I go back to the script, he was going off to work. So I don't think he would be in traditional dress, as in a kimono style or whatever it's called here. Let me go here: a young girl, Amy, in traditional attire, stands beside her... yeah, I think we need to change this to not that. Now: the lower half of a man in a 1940s Japanese worker's uniform. Let's see what that comes back with. Then it's a girl's head with a ribbon; we see his bottom half in the Japanese room. OK, let's have a little look.
OK, playing with some of these here. I mean, the color grading looks great for that, doesn't it? Look how nice that looks. Let's keep playing, but it's still giving me a kimono, so I'm probably going to have to dictate what it is. I want him in 1940s Japanese fashion: trousers and shirt. OK, now we're getting somewhere. I really like this. Look how nice this looks. So we've definitely got some kind of dated, 1940s look with this style. Let's keep going. That's nice, but I don't want to see his face so much. Oh, that's nice. Still, also. Yeah, definitely. Oh, that's really nice. Actually, let's go with... I want this one. I want to upscale that one. And I really like this one also. Let's store those.

The last thing I want to do is get the scenes. Obviously I'm just going to copy over the final prompt I used, to remind myself, and paste that in here. Then I want to add my two images here. Let's download both of these. Much better having him in less traditional Japanese dress; she's still in traditional clothing, but not him. And I think it really draws together the similarities between the two characters. OK, Amy's dad. Here it is.
OK, so the only two other scenes I really want to do here are Amy's home (and we've already seen some of this, Amy's home) and, perhaps, I'm quite interested in doing a shot from the window. So let's first start with Amy's home. Now, I can scroll back up here; actually, it's in ChatGPT and it can describe Amy's home for me. Amy's home, Pearl Harbor, a warm 1940s interior. Yeah, let's start playing with this once again. We know that Technicolor and 1940s gets me the look that we're going for. Simultaneously, actually, I'm also going to generate Amy's home in Hiroshima. OK, let's do this.

OK, it's coming back, but definitely not wide enough. Let me do this: a wide shot, establishing shot. And let's do that. It's probably going to be the same thing for Amy in Japan. Let's see. It looks slightly wider, but not wide enough. Let's do: wide shot, establishing shot. But I really do like this window. Look at this, where you could see if there was an explosion out there. OK, let's have a little look at this. Nice, it's got the ships out in the back, right?

Let's once again use this prompt, but let's go: very wide camera angle, establishing shot. Let's see what the Japan one did. It did the same thing. OK, let's call it very wide. OK, it's not doing it. There must be something in my prompt that I've overlooked. Very wide camera, establishing shot, Technicolor. A warm American house interior with a wooden table and crayons spread out. A navy hat rests on the edge of the table. Navy ships visible through the window in the background, with soft morning light.
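When a generator keeps ignoring "very wide" written in the prompt text, an explicit aspect-ratio parameter is usually more reliable; in Midjourney that's the `--ar` suffix. A small helper like this (a sketch, with a hypothetical style prefix for this project) keeps framing and style consistent across every scene prompt:

```python
STYLE_PREFIX = "Technicolor, 1940s."  # shared look for every shot in this project

def scene_prompt(description: str, aspect: str = "16:9") -> str:
    """Build one Midjourney-style prompt: shared style prefix, the scene
    description, then an explicit aspect ratio so wide shots come out wide."""
    return f"{STYLE_PREFIX} {description} --ar {aspect}"

print(scene_prompt(
    "Very wide establishing shot of a 1940s American neighborhood, "
    "Pearl Harbor in the distance, soft morning light."
))
```

A wider ratio such as `21:9` pushes the framing even further toward a cinematic establishing shot.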
OK, so another way to do this, if I wanted to: if I go to the editor (again, I'll show you all of this in the following lectures), I could just do it like this. Let's submit that and force it to give me a wide shot. So let's have a little look at what we see right here. OK, here it is. That's nice. But does it match what I have in my folder so far for Amy's dad? Yeah, it could match that home for sure.

Alrighty then. So that took some work, but I've got the kind of setting scenes that I want here. They're not the final ones, and you won't have a shot this wide; it's very rare that you will. I actually have a couple for the shot where she could be sat. This could easily be the shot where she's sat right here, of course, and then out the window you see the naval ships and stuff. And then for Amy in Japan, this is the shot that I've got here. Really nice view right there, looking out of the window over the city. So once we do have a shot where we look out of that window, we can see the explosion happening. So I'm just going to download these.

So now I've got both those shots in here to give me a style reference, and the living room here. I'm not going to do the window scene in this, because the next section, where we do the storyboard, is where I'm going to add a lot more shots. But I have a reference here with a window open that you can see Amy could run to and look out of, and also here, sat by a great big window. And then this is more of a style reference for colors and things like that.
So now I have my entire mood board created for this. I've got the fonts I'm going to use. You can imagine this coming up with Amy's name in this font, or the opening graphics. You could do something really cool and stylized: mix this with a Tarantino style, where you have a name come up, this modern way of doing titles, but with an old-style font and an old-style movie. That's going to be kind of cool.

So we've got Amy sat here coloring, and we've got Amy sat here coloring; it's probably going to be this shot predominantly, and this shot. Then we've got Amy and her father, and we've got Amy and her father. This is really nice. And then we've got the scenes.

So now I have my mood board. We've done most of the work. It's all stored there, you've downloaded these, you have these images here inside your project, and they're inside Midjourney. So we can now move on to the next section. Let's storyboard this, which means taking these styles that we've got for these characters and putting them together into a simple 12-or-so-step sequence that tells this story. It's going to be the skeleton for our entire project. And after that, we'll be able to fill those in, animate them and make our final movie far more effortlessly. Get this done first and then I'll see you in the next section. Task for you next. Enjoy.
— Task: Create Your Style Guide for AI Videos —
I hope you enjoyed seeing me create that myself and, finally, in a section I know has been a long time coming with all the pre-production stuff we've been doing and learning, and the fundamentals, getting a bit creative with you. It's only going to get more and more so in the next sections; it's creative from here on out, which you'll be glad to hear.

So I have a task for you here, not surprisingly. Let me put the slide up on screen and I'll explain. The objective is, of course, to develop visual elements that convey the style and feel of the video project you've been working on, using AI tools. So what I want you to do is generate key images using the tools, so Midjourney predominantly, though perhaps you'll have another one. Create visuals that capture the mood and theme of your project; focus on settings and character designs that align with the narrative and style you want to portray.

So, key design settings here: identify the primary settings for your video. We got our shot list from ChatGPT, so is it all in one location, or is it in two, three, four locations? Get those down, and then generate images to reflect these environments in the styles that you want: the tone, the color. Is it film noir? Is it cyberpunk? Etc. What's the feel of it? Then do pretty much the same thing for characters. We have a list of our characters, and I want you to generate at least a profile picture for each character: what they're wearing, what they look like, etc. And then save all of this. Of course, we're going to have these saved and stored along with everything else you've been doing for this project, to move forward with. We are nicely collecting everything we need here for our story.

So in the next section, I'm going to make a bit of a storyboard; that's some more images. Now that we've got our cohesive look for our video, we can actually create our storyboard. For mine, I'm going to have something like 10 to 12 images or so, and that will be the beginning of my story. I can actually lay those on my timeline and it will be a readable story, before we then add little shots in between those and animate them to make a moving visual story. So that's the next stage: doing our storyboard, which is really generating some of our first images, our skeleton, if you like, for our script and our visuals, all in the next section after this task. Looking forward to seeing what you guys have done. I'll see you in the next section.
— Storyboarding: Your AI Video Production Foundation —
So storyboarding is often a thing that people want to skip, obviously, much like the last section. But in AI video it's kind of crucial, or at least it will save you so much time, and it's such a good practice. Basically, a storyboard in AI video is making a skeleton of your whole story, to then fill in with more images in the middle, which we animate to make our video. So if I bring up the slide right here, let me explain more.

Why should you do this? One: a clear visual structure. A storyboard helps organize the flow of your story, showing each scene's purpose and how they connect, making it easier to visualize the entire project. There may be mistakes, gaps in your script, or something wrong with a visual that you thought was going to be okay in images but actually isn't, and unless you put this down on your timeline, it's very hard to see.

Efficient image planning is the next one. With a storyboard outline you can see exactly where images are needed, allowing you to focus on creating specific visuals to fill in the gaps rather than generating unnecessary content, saving you time, and money if you're on a credit-based plan. Also, if we put our whole story down as, say, 12 images, and the whole thing needs 24 or 30 images we're going to animate, then I know I've only got to add one or two images in between those, and I've pretty much made my entire AI video. So this really, really helps. This is more than a storyboard; it's more like the first part of production.

It's also a foundation for animation. Once the storyboard is complete, you'll have a full sequence of still images ready to be animated in the next phase, streamlining the transition from static visuals to dynamic scenes. We are just going to take these images and animate them. These are the cornerstone of our entire story.

So, the storyboard flow, if you like, again on the sheet here. We go from a shot list: I'm going to show you how to generate a shot list from ChatGPT in a moment, based on our script, so we know pretty much every shot we need for the storyboard. It'll be very easy for us to generate these; we don't have to go through and think about what we need, AI can help us do that automatically. From that, we are going to generate the images for our storyboard. I'll show you a little bit here, and in the next section we go really in depth using Midjourney and some other platforms for generating images. Then I'm going to arrange these in order, and we're going to present and visualize them so we can really see whether this story is working or not. Then I'm going to use that as a base for the next section; this is basically the skeleton whose gaps we fill in there.

So yes, this step is crucial, and I really recommend doing it. Rather than just a storyboard, it's the first part of the production process. If you haven't done this when you get to the next stage, you're going to be a little bit lost, so don't skip it. So let's go into the next lectures. Next, let's actually generate a list and work out what it is we need for our storyboard.
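Once the frames are generated and downloaded, arranging them in story order can be as simple as numbering the files and writing a small contact sheet. Here's a minimal sketch; the folder layout and `01_pearl_harbor.png`-style naming scheme are my own assumptions, not a requirement of any tool:

```python
from pathlib import Path

def write_contact_sheet(folder: str, out: str = "storyboard.md") -> int:
    """Collect shot images named like 01_pearl_harbor.png, 02_amy_table.png
    and write them, in shot order, into a Markdown contact sheet."""
    frames = sorted(Path(folder).glob("*.png"))  # zero-padded names sort correctly
    lines = ["# Storyboard"]
    for frame in frames:
        number = frame.stem.split("_")[0]
        shot_name = frame.stem.split("_", 1)[-1].replace("_", " ").title()
        lines.append(f"\n## {number}. {shot_name}\n")
        lines.append(f"![{shot_name}]({frame.name})")
    Path(folder, out).write_text("\n".join(lines), encoding="utf-8")
    return len(frames)
```

Scrolling that one file top to bottom is a quick way to check whether the story reads before you spend credits animating anything.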
— Identifying Key Visuals for Your AI Video —
Now, the first stage in building our storyboard, which, as you now probably realize, isn't so much a storyboard as it is a framework and skeleton. The first stage of actually producing our video here is to work out: well, what shots do I need? I have my script here, just like you probably do, and I could read through it manually and go: okay, I need a shot of this, a shot of that, and a shot like that. That would make sense, but this is an AI course and we're in the age of AI, so I can say to ChatGPT: hey, can you do that for me?

So let's ask it. I'm wanting to generate a storyboard of 10 to 12 images to tell this story. Please (I'm so polite with AI still), please generate a list of the 10 to 12 images I would need, and a prompt to use in Midjourney for each image. This is the script. I'm going to just paste that script in here. It would know it if I'd asked, because we've been in this conversation with ChatGPT, but perhaps you've been using another tool. So I'm just going to paste that in here and wait.
34
It says, here’s a storyboard breakdown for 10
35
to 12 prompts to help convey the story
36
and tone and setting for mid-journey.
37
So the opening shot, split screen.
38
This is the split screen one, Pearl Harbor
39
and Hoshima at dawn, a peaceful American 1940s
40
naval harbor on one side, a serene Japanese
41
cityscape with traditional rooftops on the other, but
42
under a soft morning light.
43
So I’m not sure I’m going to have
44
the split screen.
45
Maybe I will.
46
It might be quite nice, but I’m going
47
to definitely use these prompts individually for that.
48
Okay.
49
So one, a, an American home interior, warm
50
morning, London, girls name, Amy, six, eights.
51
It’s on a wooden table.
52
Crayons.
53
Yep.
54
We know that one.
55
Then I need another shot, a closeup of
56
the hands drawing the crayons, telling the story
57
and we can see what it is they’re
58
drawing.
59
And there’s a Navy hat resting on the
60
table.
61
Okay.
62
And then I need the opposite in Japan.
63
I need Amy doing the exact same thing,
64
but in Japan, and then a closeup of
65
her hands being mirrored here.
66
Now this thinks this whole movie is a
67
mirror split screen.
68
So it’s done a B, B like this.
69
But I may not decide to do that,
70
but it’s great to have this.
71
So then to a, Amy, a young girl,
72
Pearl Harbor, glances over my drawing with a
73
soft, curious suspicion.
74
Morning light cast in a warm glow.
75
Her father’s Navy hat, uh, sitting slightly in
76
the breeze, a young girl, Amy gazing through
77
the window.
78
Uh, her face is serene yet sensing something
79
unusual.
80
The morning light illuminating the quiet city rooftops.
81
And then we’ve got Pearl Harbor interior.
82
Amy stands slowly looking towards the window of
83
concern.
84
Um, a drawing lifts slightly navel hat lifts.
85
Okay.
86
Hiroshima.
87
The room feels of intense white light.
88
Amy stands.
89
Uh, her father’s glasses slip from the windows.
90
And then the final scenes transitioning from a
91
single frame side-by-side.
92
Okay.
93
And then a serene day, the hues of
94
orange gently illuminating the figures, they go stand
95
in solemnly newfound awareness embodying beneath the sky
96
if ever changed.
97
So you can see there is missing here,
98
um, their two fathers kind of saying goodbye
99
to them for the day.
100
I might add that shotting here.
101
You don’t need it to tell the story.
102
That’s why it’s not here.
103
We can do it in the next section.
104
But this is how we get each one of our shots. So I have a 1, 2, 3... yet these are all individual shots, so I've got 11 shots here. I might do 13, or I might take away one of these, so I'm going to have 10 or 12 shots. We're going to generate those inside Midjourney next, and then we're going to display them: the skeleton framework for our story. This is exciting.

So don't do this manually, but do be aware of your script. Of course, I'm aware; I know what the story is, so I know if there's something missing, in case there's a mistake or I can think of something better. Our minds as creatives are still better than AI, but this takes out a lot of the manual work for you, obviously, and that's what we're here for. So let's move on to Midjourney next and create these shots to start creating our video.
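The same shot-list request can also be scripted rather than typed into the chat window. Below is a sketch using the official OpenAI Python client; the model name and the exact wording of the request are assumptions you should adjust to whatever you have access to:

```python
def build_messages(script: str, n_shots: int = 12) -> list[dict]:
    """Assemble the chat messages for a storyboard shot-list request."""
    return [
        {"role": "system",
         "content": "You are a storyboard artist for short AI films."},
        {"role": "user",
         "content": (f"Generate a list of {n_shots} images I would need to "
                     f"tell this story, and a Midjourney prompt for each "
                     f"image.\n\nScript:\n{script}")},
    ]

def request_shot_list(script: str, n_shots: int = 12) -> str:
    """Send the request to a chat model and return its shot list as text."""
    from openai import OpenAI  # requires `pip install openai`
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: substitute whichever model you use
        messages=build_messages(script, n_shots),
    )
    return response.choices[0].message.content
```

Scripting it this way makes it easy to rerun the same request after every script revision and keep the returned prompts alongside your prompt log.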
— AI Image Generation for Storyboards —
In this lecture, we're going to follow on from the last, and I'm actually going to generate the images for this. Now, this is basically the course project that I've been working on, that you've followed me through all the way from the beginning: I generated an idea and a script, we outlined it, and everything all the way through to my style guide in the last lecture. And now we have this, the storyboard. So you're going to simultaneously see me generate for my specific film, the course project, and just generally how I do it.

Now, it's almost a real-time lecture as I generate images. I will edit it together to be slightly shorter than real time; you won't sit and wait for me to load each image. Obviously I'll make it more watchable than that. But if you're not interested in watching me go through it, because you want to learn in the next section how we actually use Midjourney for this, then you can just skip to the end of this lecture and you'll see my results. And in the next lecture in this section, we will be putting this all together, so you'll see it displayed nicely and clearly. Without further ado, I'll get on with it.

Now, we already have, of course, from the last section in this course, if I just grab it...
So we have these shots, don't we, of Amy and the scenes that we were setting, both Amys and their fathers. They are going to make up part of our storyboard, so we already have about 1, 2, 3, 4 or so shots out of the 12 that we need. So let's follow through with this and start from the beginning.

First we need the split-screen Pearl Harbor. I don't need to put this in here; I'm going to say: scene showing Pearl Harbor, like this, morning-lit. So I'm just going to copy this over onto here. Now I'm going to run that just as it is, and I'm also going to run it with the front added: I'm going to say establishing shot, and then, as we know from the last one, Technicolor 1940s, and run that. Let's see the results.

Oh, I've just seen that I made a mistake here; this is why we're getting some funny results. I've got Pearl Harbor, and sometimes a Japanese-looking feel. But these are nice, aren't they? That Technicolor type of look. Because inside here, let me just use this: scene showing Pearl Harbor (because it was a split screen) at dawn, a peaceful American 1940s naval harbor. And then I don't need that right there, but I do need to see houses. Scene showing Pearl Harbor at dawn, a peaceful American 1940s neighborhood with many houses, rooftops showing under the soft morning light. Let's run with that one.
But sometimes I do like the mistakes that come up here, because you could actually... this is still quite cool, the effect it's having. Like this color here, Technicolor. Amazing. Let's take a look at this. This is dawn; it looks a little bit too early. This one looks a little bit better, almost like military-style houses you can see here, and there's definitely a naval base. This one, you can just about see it here. OK, I want to run some versions of this one, and I'm going to run some versions of this one. Let's have a little look. That's nice there, with the birds in the foreground on the telegraph pole right here. I'm not sure I can see the houses where Amy might live, though, but she might be in one of these. Let's keep looking at some of these.

I'm just going to try one more prompt: Technicolor, 1940s, scene showing a small-town neighborhood of houses with Pearl Harbor in the near distance at dawn. Let's play with that and see how it comes out. OK, let's have a look at what I got here. This looks quite nice. Let me actually zoom that down a little bit, because I want some more of Pearl Harbor in shot. Let's submit and see how that one comes out, but I do like the Technicolor it's got there.
OK, this is quite nice. The last thing I'm going to do is actually bring this in a tiny bit here, because this is going to be my establishing shot, where I'm going to have a small amount of movement as it zooms in. So I just want a little bit more here. And then, when we get to AI video, I'm going to make the camera go like this over the top of Pearl Harbor, with text coming up on screen. So this should be the final edit that I want right here.

So I've just been playing with this. This is my establishing shot of Pearl Harbor. And yes, it doesn't have to be historically accurate; you've got somewhat of a representation of what it is. We're not making a documentary, this is a movie after all. So as long as it represents what you want it to represent, then I think that's fine, personally. So here we have the options where I've zoomed out. Yeah, that's fine. Let's upscale that so we have a final image, and let's start making our version for Amy in Japan, where she lives.

So I'm taking the same prompt, and I want it on the same image: Technicolor, 1940s. Now, I could, for example, be using this image if I really like it and I like the style. Or, I'll show you this one while it was still loading: I could be using the style reference to say, I want this in this style, to start matching things up. That's something we can do, and we can talk about it more when I show you how to use Midjourney in the next section.
section how to use mid journey.
172
But let’s just run this or I should
173
have run a couple of those.
174
Let’s run a couple more to and now
175
I’m going to get 12 responses.
176
OK, this generated some nice images here.
177
Let me have a look at this or
178
and it’s still in that really nice Technicolor
179
version.
180
OK.
181
Oh, this is nice.
182
You can see the wooden houses that could
183
be one of hers for sure.
184
Let’s go over here or and that’s got
185
the nice landscape on it.
186
OK.
187
Let me.
188
I quite like this one, actually.
189
So let me just get some versions of
190
this, a subtle one and a strong variation
191
of it.
192
Was there anything else that I really would
193
like the look of?
194
I mean, I do like this with these
195
rooftops is we’re going to fly over the
196
top of this.
197
Maybe I’ll get a subtle and a strong
198
version.
199
Let’s see if I can get Amy’s establishing
200
shot for Japan out of these.
201
OK, let’s flick through some of these images
202
again.
203
This rooftop bit right here is really nice
204
and again, somewhat of a cityscape, which I
205
wanted as well as these houses.
206
Let’s take a little look through these.
207
That’s quite nice with an industrial feel to
208
it.
209
And these are the rooftop versions that I
210
like the look of.
211
OK, let’s keep going.
212
I think these wider ones are nicer like
213
here.
214
So I’m going to choose one of these.
215
I think I like this one.
216
And this could be Amy’s house with the
217
window open right here.
218
We’ve got a bit of a cityscape right
219
here, some more industrial with the mountains.
220
That’s really nice.
221
Let’s upscale that.
222
OK.
223
So, so far, we've got our two images. If I go back here quickly, we've got our image of Pearl Harbor, if we open with that and have the Pearl Harbor titles. And then we've got our image right here, which is of Amy's home, too; sorry, inside this one. For the next one, we're going to have an interior of an American home, warm sunlight, a girl named Amy sits at a wooden table with crayons, Navy ships visible through the window behind her. I want to get this image because in the image of Amy we already downloaded, we don't see those naval ships. It'd be quite nice to have an image almost from the back or the side of her, with the ships out the window. And then I also want to do the same for Japanese Amy, before we go to the shot we already have of her that establishes her coloring in. So let's work on those two shots.

OK, I didn't want to bore you with going through every single rendition here. So I started with using, in the style of the image we've already generated: a still of this girl from behind, at a wooden table in a living room in 1940s USA. She's coloring in, but we see her only from the side and behind; we do not see her face. Next to her is a large window; outside we see Pearl Harbor, lots of naval ships in the distance.
So it was giving me lots of different
261
stuff here.
262
I added then, of course, Technicolor and 1940.
263
I was getting a lot of black and
264
white things, and then we started getting some
265
nice stuff like this.
266
Any one of these shots, this one especially
267
be quite nice.
268
I like this one because of the symmetry
269
behind it.
270
And yes, this has warships out of it,
271
but they’re so in focus.
272
I like this one slightly less in focus.
273
Perhaps this is slightly more 1940s.
274
We’ve having that depth here.
275
What I might do is actually just run
276
this just for comparison.
277
I just want to run that and submit
278
it while I talk you through these.
279
So I ran some variations of this, which was great, and then I chose the one I liked, which was this one. Then I wanted to make it wider. So this one is giving me a door, that one has given me a chair, that one's giving me something cinematic with a bit of a chair; quite like this. And here. So then I had to make her dress match. If we go back to the other one, she's got a lighter-colored dress, which might change, but I think a white dress, for the connotation, is nicer here. So I changed her dress. I'll probably go for one of these: maybe that simple one, maybe this one. It doesn't really matter; one of these. They're all pretty good. These? Yeah, really nice. So I just want to have a look at those ones we've just been generating here.
This one's a bit dark and moody. That's also quite dark. This one... I think I like this. And which dress do I like? I want to match my other shot that I already have of her, in which she has sleeves that are a little bit on the puffy side, like that. So let's see more like this one, I think. Yeah. I like this backing of the chair. OK, let's up-res that one, and let's do the same thing. I'll go ahead and do the same thing now for Amy in Japan.
OK, I think I've got my image here of Amy in Japan. Let me show you my process, rather than have you stay here the whole time while I work through it. So I did the exact same prompt I did for the other Amy: Technicolor 1940s, still of this girl from behind at a wooden table, 1940s traditional Japanese house. She's coloring in, but we see her only from the side or behind; we do not see her face. Next to her is a large open window; outside the window we see a Japanese 1940s city and town. At first it was generating not much with the Technicolor feel, or maybe slightly there. Then we got into it a little bit here, but this has a bit more of an illustration look. And I was using either the person, Amy herself, or the image style as a reference; I'll go over all of that next section. Here it's getting a little bit closer, but she's facing the camera. So I kept going along here and generating until I thought: let's just do one without any of the image prompts, in style or anything else. And I got an image that I liked using that.
I liked this layout here, seeing her from the side, but she's got the wrong hair and everything else. So I generated it again and changed the background slightly; I was just doing a quick one to see if I could get something else. Then I thought: I need her in pink, because if we look at the original image, she's wearing a pink top. That took some time to get right, until we eventually got it. Then I noticed she had the wrong hairstyle, so I had to give some prompts like "girl's bob hairstyle" after I changed it, because when the hairstyle was different I got some very strange results. Eventually I had this one right here, of Amy at the table coloring in. So we can work out both of these shots. If I put them all up right here: I've got the establishing shots, one, two; I have the side shots of the girls, one, two; and then I'm also going to have the original images that we have, one, two. So we're really setting the scene right here.
What I need after this is them coloring in, and maybe the father's naval hat, which is something I would then change with the image and put in here. But I might just have it so that I can have a drawing of her father and a ship or something. So I'm going to use some of this prompting here, close-up. I'm going to say first: Technicolor 1940s, camera angle from above, girl, child, small hands drawing with crayons, creating a family portrait with a large boat, wooden table she is working on. Let's play with this and see what comes up.
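The prompting pattern used throughout this lecture — fixed era and style keywords up front, then the changing shot description — can be sketched as a tiny helper. The function name and keyword list here are my own illustrative choices, not part of any Midjourney API:

```python
def build_prompt(shot_description, style_tags=("Technicolor", "1940s"), camera=None):
    """Compose an image-generation prompt: style keywords first,
    then an optional camera direction, then the shot itself."""
    parts = list(style_tags)
    if camera:
        parts.append(camera)
    parts.append(shot_description)
    return ", ".join(parts)

# The crayon close-up from this lecture, rebuilt from its components:
prompt = build_prompt(
    "girl, child, small hands drawing with crayons, "
    "creating a family portrait with a large boat, wooden table",
    camera="camera angle from above",
)
print(prompt)
```

Keeping the style tags in one place means every shot in the storyboard inherits the same look, which is exactly why the black-and-white results disappeared once "Technicolor" and "1940" were locked into every prompt.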
And then I'll show you at the end. I'll keep reworking and reworking, and I'll show you the results for both Amys at the end of this.
So now I have the next shots of both Amys for you here. I used: Technicolor 1940s, camera angle, girl, child's hands, looking from above, wooden table, etc. It gave me these, and yeah, the coloring is not too bad. This one I like the most, so we worked on it. Of course, she had yellow sleeves here; in our photo of American Amy, which I'll show you, she is sleeveless in a dress. So we removed that, and then we kept working on different images until I got her drawing a boat. That one was not correct; then drawing a boat, a ship, like this. And this is the final image. I may change this image eventually to have her and her father in it. Make sure with AI that you count the fingers and check everything's OK. And then the same with the image of Japanese Amy, except I used the words "traditional Japanese" in here to see if it gave a different feel. And this was my favorite one, including the grade. So I kept working it and working it.
Oh, and I did a little bit more on the other one. I liked this one and this one; this one a little bit more, quite cinematic. So I kept working on that, seeing if I could change the image in here. It's something I will probably do in Photoshop, and I'll show you how to do that in the next section, to add the image that I want in here.
Here's the final one for Amy in America. I was getting some funky results, so this was the final one for that. I'll probably flip it around and probably change the image when we come to the next section, where I can show you how in Photoshop. And then here is Japanese Amy. I quite like this shot also, so I might just up-res this, keep them both, and decide in the next section. So we're getting ourselves a little story coming together now.
So now I need both their fathers, who come in to greet them. We have that shot already; if I can show you on here: they come in and they leave for work, and then we probably have Amy continue to color in, fade to black as if some time passes, and then Amy is back again. Her expression changes, and the light and everything changes. Amy looking out the window: what I want to have, rather than generating this again, is both Amys looking out the window, then an explosion, and maybe a close-up of their faces looking dramatic. So let's do them looking out their windows respectively, a close-up of their faces and expressions, darkness, and then the explosion, which I might do as well, just to show that on here. So let's do that first one here. I want Amy looking out of the window. I'll get playing with that, and I'll come back and show you when I'm done.
So I think I've got my two shots here that I wanted to create. Technicolor 1940: this girl is looking out of a traditional Japanese window over the Japanese city; we see her from behind. So I got all of these shots. I started to like one like this, and I kept playing with it, getting more and more. But I like this big open window. There are obviously some changes I need to make, but I quite like this yellowness. It reminds me, if I look back on our style guide right here, of Douglas Sirk, a director that I love. That's 1950s, but 1940s stuff has that feel too, with these different punches of color: you see these mauves and purples alongside these yellows. So I kept having to go through this, and of course she needs a black kimono. So this one is good. And then I changed it to this; I've opened the window slightly more. I'll show you all about this in the next section, obviously.
Then I ran exactly the same thing: Technicolor 1940s, girl looks out over Pearl Harbor, many naval ships, from behind. And this gave me some really nice, different images. I did wonder about having the window open like this. This is actually really nice; I might just play with this right now. I just want to edit this to the white 1940s dress that she has here. Actually, it's more of an off-white; I can't decide between those. And then I kept playing, and I really like this shot: she looks over, and it has a really nice feel to it too. Again, she had to be in a white dress. So I had this one right here; this is the shot that I liked, so I'm actually going to download and store that one. Also, I've just been playing, as you saw there. That's not the dress that I wanted, but I could keep playing with this. Although, now I look at it, I like this shot a little bit better, I think. Yeah, so I have both my shots for that. Now I want an extreme close-up of their faces with a bit of a shocked look.
All right, I think I've done what I wanted to do for these. Let me show you: 1940s Technicolor close-up of this girl's face, using this as a reference; horror, scared; she looks out of a window. I eventually changed that to "out into the distance", because I was getting this side shot, and what I want is an extreme close-up like this. So I reworked this image slightly. In the meantime, I got this image that I really love; it's a really nice-looking image with a great feel to it. I might use it for another shot somewhere else in the video, so I just downloaded it, uploaded it, and kept it. And this is what I was starting to get. Yes: imagine her looking out the window, she sees the explosion, and this is her reaction that we cut back to. Nice. So I'm working with this and this, these two shots right here. Obviously, I would change that to a pink kimono. Maybe I could do that right now while I'm recording for you. Let's do that. OK, done.
And then for the other girl, the other Amy, I started getting these images, playing with this very Dorothy-from-The-Wizard-of-Oz kind of feel, and another one here. So I went with this and this; I really, really like both of these. They're really fantastic images showing lots of emotion. I just need to do the same thing, where I change her dress to off-white. All right, great. They're just up-resing here. I have her in off-white, although it's kind of purple with the color grade we've got happening here. I'll up-res those two; then, of the other images, I wanted to keep this one now and keep this one for later. So the only other thing I want to do to tell this story, before we add in all the others, is actually to show the explosion over the city of Hiroshima and also the attack on Pearl Harbor, with some explosions there too.
So this is almost like: she goes to the window, you see the explosion, and we come back to this shot. All righty. So I started asking for: 1940 Technicolor, wide view of the harbor homes, Pearl Harbor, many naval ships, planes dropping bombs; as well as: wide shot view, distant, huge bomb dropped on town, city, Japan. And I was playing with lots of different types until eventually, rather than using "Hiroshima" at all, I just had to say: a Japanese town, rooftops, huge mushroom cloud bomb. And I started developing that. For Pearl Harbor, I also had to do: wide view from the ground of the harbor and homes, Pearl Harbor, many naval ships and planes dropping bombs, explosions, Japanese airplanes; until eventually, working back and forward and back and forward, I settled on this. This is the view from her window: you can imagine the explosion going off, the whole scene filling with orange, and then her face filling with that color. I'll also download that. Then I was working on Pearl Harbor and couldn't decide between two, so I've got this one right here as well as this one right here. We're still keeping that 1940 Technicolor feel, and I'll decide which one when we get to the next stage.
So now I've got all of my shots on here, and I can just put them on screen one at a time. I've got both the establishing shots of the USA and Japan; the establishing shots of Amy from behind, USA and Japan; the front shots of Amy, USA and Japan; shots of Amy and her dad, USA and Japan; then Amy looking out the window, USA and Japan; these shots of the Pearl Harbor and Japan explosions; and then the faces of Amy, USA and Japan. But let's put these out into a nice display. Let's move on to the next section and start seeing whether our story makes sense, or whether we're missing anything.
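Since every story beat needs both a USA and a Japan version, one quick way to check nothing is missing before moving on is to track shots as (beat, location) pairs. This is a throwaway sketch; the beat names are my own summary of the shot list above:

```python
# Each story beat should exist for both the USA and Japan versions of Amy.
beats = [
    "establishing shot",
    "Amy from behind",
    "Amy front shot",
    "Amy and her dad",
    "Amy looking out the window",
    "explosion",
    "Amy's face close-up",
]
locations = ["USA", "Japan"]

# Shots gathered so far, as (beat, location) pairs:
shots = {(beat, loc) for beat in beats for loc in locations}

# Any missing pair would show up here before we start animating.
missing = [(b, loc) for b in beats for loc in locations if (b, loc) not in shots]
print(f"{len(shots)} shots planned, {len(missing)} missing")
```

For a short film like this, a paper checklist does the same job; the point is simply to verify the beat-by-location grid is complete before generation moves to video.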
— Storyboarder.ai: AI Storyboarding Made Simple —
Now, the next tool I want to show you is Storyboarder.ai, and it's a really good one. I'm actually really impressed, and it gets better and better the more I use and play with it. By uploading your script, or even just a couple of sentences that you have, you can automatically generate a storyboard for your story in all different styles. You can then change those images, upload your own images as references, edit inside the tool, and even turn the shots into videos and export the video to your editing software. So you could actually do everything inside here if you wanted to. You get slightly less control than if you were making your images with Midjourney the way we have been, so I don't use it directly; I make the storyboard myself. But I definitely would use it if I were coming into this without much design experience, or if I wanted to save time. It's an amazing tool, and yes, there is a cost for it; I can show you that after.
I'm on the free trial version right now. Actually, let me just show you that. So I'm on the free trial to show you what you will have and what it will look like. For $49 a month you can get up to 5 projects, and then there are $100 and $300 options. So it can be pricier depending on how many projects you're going to do. At 5 projects, that's roughly $10 a project to be using this. Let me show you the tool, and then you can decide.
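That per-project figure is just the monthly price divided by the project cap; the numbers below are the ones quoted on the pricing page at the time of recording and may well have changed since:

```python
monthly_price = 49  # USD, entry tier as quoted at recording time
projects = 5        # projects included at that tier

cost_per_project = monthly_price / projects
print(f"${cost_per_project:.2f} per project")  # roughly the $10/project mentioned
```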
So let me just start a new project, exactly as you would. Now, I have a script already, don't I? But I don't have it as a PDF, and it's not in the exact same format either. Let's just do it by concept, and I can paste it in. So let's get the title from this: it was "Amy Under The Changing Sky", which will probably change anyway. For the genre, I would probably put most of it down as drama. Then let's go next. Or, start typing your idea. So let's do this: opening shot, split screen. This is the version with all the split screens still in; I might as well show you this example, because we've already generated it together, so you know what it looks like. Here's the ending, and there. Let's copy that, go to Storyboarder, paste it in, and go next.
Now, choose your storyboard art style. This doesn't matter too much, because you can change it afterwards, and I could easily switch from one to the other: regenerate the storyboard, as you'll see me do, or change it. You've got photorealistic right here, but let's just go with sketch at first. I like this one; this is how you would have it if you were creating it. And let's add to project. OK, it just said: start project; your script is cooking in the background; let it finish, then take a peek. Sure, no problem. Let's take a look.
So what it's done is format it as a script. Do you remember, on the site I was showing you, under AI video script, there were different formats: the BBC, the British Broadcasting Corporation, and different ones. I could actually show you: if I upload this, this is the format it's in. You see, with exterior, location: the format of a script. Well, that's what Storyboarder is doing. You see: interior, Amy's house, kitchen, morning. Now, this isn't correct. It doesn't matter for this example, because I'm just going to show you how to use the tool, but I could go into edit mode and change any of this. For example, this scene is in her living room, so I can change that, and I can read it through: Amy's warmly lit... carefully colored... drawing her father... "make the most of the day, Amy"... Amy's living room in the morning. OK. So you'd go through this and make sure it was right, especially the locations and anything else. But for the sake of this example, it doesn't matter too much; I'm going to show you how the software works. You'd go in and change your script right here.
So now that's done, it's going to generate a shot list. It's almost like the automatic version of what we were doing with ChatGPT a couple of lectures ago, when we asked it to generate a shot list for our storyboard. So it's just going to start generating. OK, continue. "Your shot list is brewing in the background." Create character consistency: "Storyboarder is a mind reader, sort of. It automatically spots your characters and underlines them in blue." You can see this Jake Gyllenhaal kind of character right here. So it's underlined the characters to make sure we have consistency, which is amazing.
So if I go through the shot list: a warm, softly lit room with the morning sun filtering through the windows; Amy sitting at a wooden table, crayons out, Navy ships visible through the window. Close-up: Amy's hands drawing, a Navy hat resting on the edge. Amy's father, in Navy uniform, standing next to her. All right, that's perfect; that looks good. And then I can see the shot size here: that's a wide shot, here's a medium shot. You can change these after it's generated, don't worry, but I could go: that's a long shot; here's a close-up of Amy's hands, Navy hat resting; that's a close-up; father in a naval uniform, a medium shot. If I wanted any of these to be a different kind of shot, I could click and just change it, of course. Like, if I go back to this one, it's a wide shot; let me just change that to an establishing shot. Great. Let's take what we have here and storyboard it, and you can see what that looks like and how to change it.
OK, so that's just generated part of it. Let me show you what it looks like. You can see, coming in here, it's shot one, shot two, shot three, with a little detail underneath each and any characters in blue. Here are the shots: the naval hat, her and her father. And then it just generates more and more; obviously this is just the free trial version right here. So what you can do is say: oh, that's not actually what I wanted. Let's retry that, and it will just retry the shot and regenerate another image. Here we are, right there. And I can go back if I want to and decide: actually, no, that is the shot that I wanted. Now, Amy sat here with two people for some reason, so let's just retry this and see what happens. If not, I'm going to show you how the edit feature works.
OK. Here's shot one, here's shot two. I actually like shot one better. Let's do some editing. If I come over to the inpaint tool, I can inpaint things right here, or use the eraser tool, which is what I want right there. So I'm just going to erase this second person; for the sake of this shot, she's not needed. Let's do that. OK: erase, then inpaint, inpaint, inpaint. OK, so now I've edited this. Here's a close-up of the shot and all the details on it. I like this. I can add arrows on here, for example showing movement into the next shot; let's just close that for now.
Now, it doesn't matter too much about this, because we're going to animate it or change the style in a moment. Naval ships in the background: OK. Hands drawing the hat. And I could also edit this if I wanted to, which is great. Also, while we're here, let's get a variation, and I'll show you what that tool does. This is almost a bit like what you saw me doing in Midjourney: it's like asking, can you do me another one, but with just a slight change? Retry will change the image almost completely, including its layout, whereas variation gives you just a slight variation. We've gone from there to there: not much of a difference at all, just that small little tweak.
Now, if I wanted to, I could use image-to-image. When I click this, I can upload a single image. For example, if I already had one, maybe a picture I'd seen on Google that I liked, or the setup of this, or that symmetrical shot that we had from Midjourney, you could import it. So if I go image-to-image, let me just click to upload. We've been organizing our files already, so let's go into our stills, and let me find that shot of Amy where she was sat; it's quite a symmetrical painting. Yep. Let me add that in here and upload it. And it's just generating the next version of this.
OK. Now it's generated its own version of that, over which we obviously had a lot less control than when we were using Midjourney. And this is in photorealism; I could change the style on it and so on if I wanted to. I'll just click retry, and you can keep playing and playing and playing. Here we go: this is back to that sketch kind of style, but we've got her more symmetrical in the shot, although she looks a bit older right there. So I might just keep playing with that. Not a problem.
Now we've got this, and we've got our storyboard, for the sake of this example, as those six shots. Let's go to animatic, which is the next stage, where we can start pretty much animating this. OK, that's generated; let me show you what it looks like. We can see the two versions it's done; I like this one. So it's generated video. Let me just go onto here and play it. It's a still shot, so not that exciting to show you: it was establishing, and now it's playing the next, again with no movement, then onto this shot. Then onto the next shot, where she's coloring in; we could have edited this, of course. I don't know why the hat looks like a claw. And then onto the shots that it didn't generate for us.
OK, pause. Now I could go back and just start changing things, but if I wanted to, I could switch this to photorealistic and take a look at that. Save changes, and let that generate. Now it's done, let's hit retry on all of these: retry, retry, retry. Yes, yes, and yes. OK, it's generating those in the realistic style. Is it doing this one? Yeah. OK.
So, what do I think of Storyboarder.ai? It's incredible that it takes your script, formats it as a script, then makes a shot list, which you've been doing manually anyway, and then makes a rough storyboard. If you don't have any ideas of what your shots could or should look like, then absolutely you could use this. But remember that ChatGPT was describing what the shots should look like and even made us a prompt to generate them. So if you've got absolutely no idea, use this, or just play with it for fun to get your initial idea. Some people will want to use it; I don't personally. I do it the way I'm showing you in this course, but I want to give you every option, so if you want to use Storyboarder.ai, here it is. And that's just a rough overview; they've also got loads of tutorials you can go and watch, but I thought I'd give you an overview in case you want to try it. Next, let me quickly show you some file organization stuff, now that your project is getting proper. And then we'll go on to displaying the storyboard, and you can see what my version of it looks like, made with Midjourney and ChatGPT.
— Keeping Your AI Video Project Organized —
I just want to take a moment while we're here to talk to you about file management, because it's going to start getting important now, and not many people talk about it. When you're starting these projects with AI, you're going to have lots and lots of up-resing happening inside, say, Midjourney, and lots of downloading of images, perhaps images that you don't need. So I'm just going to show you a quick hack that I use to store all of this, and how organization is needed, how it's integral.

Now, you've seen me use Google Drive. I'm just going to show you how I organize my files here on my desktop, or wherever you want to store them: on a drive or whatever. You could do the same thing inside Google Drive or any other cloud storage if you wanted to. So, the structure I typically use: let's say this is our course project. The main thing I'm going to have inside here, of course, is different categories. I'll start off with stills; these are the images you're going to have. Then we're going to convert these to videos, so once they've been converted, they go in here. You will make more stills than you turn into video, I'm sure, and then you put all your videos in here. You may also, while you're in here, be noting which of these are good and bad, which ones you want to use and which ones you don't. I download everything, even if a video didn't come out that great, just in case I need, I don't know, the first one or two seconds of that 10-second animation that the AI made when I don't like the rest of it; I might use that bit. So I download lots here. OK, let's keep going.
rest of it, I might use that bit. So I download lots here. Okay, let’s do keep going here.
20
What I’ve got here is scripts. Now I download this and that’s going to be everything from
21
my scripting and my ideas and when I have shot breakdown. And what I put into this one
22
is my style guide and story board. Now inside here will be the actual finished product of
23
my storyboard that I have or I display out so I can see it like you saw us making inside
24
this section, but the actual stills will be in here the video and then I’m also going
25
to have the final now this will have all of my final exports when I put together my let’s
26
say my my final video, but it might not be the final one I might export watching go on
27
there’s a bit wrong with that clip. That’s fine. What I do is I number those. So if it’s
28
course project one, then course project 234, I might have 10 different videos in here and
29
they get better and better and more towards a final. Also, you may have different versions
30
if you’re making this 6991611 for Facebook, or perhaps Instagram, tick tock or for YouTube,
31
you may have different formats. So that’s how I do that. So let me just put some stuff
32
into here and you’re going to see how I’m organizing this. Okay, so inside say the stills,
33
for example, here’s all the images that we’ve downloaded so far for our storyboard and when
34
we were coming up with our ideas for our sets, etc. So it could be for example, if I take
35
this right here, let me bring that into size, it might be that actually, this isn’t right,
36
I want this in here, maybe I use it, maybe I will, but I’m not going to convert that
37
into video, I just mark these off, I mark that with a red, red, no, and the rest are
38
okay. I’ve got here these shots here that I might not use that at all. I might use that
39
one instead, I might use that one instead of the other shocking face that we have here,
40
it might be that I mark these down when I decide because I’ve got these two shots of Pearl Harbor
41
here. I’m like, oh, she looks out the window and sees this, or they should look out the window and
42
see this. So it might be that I mark this one down and I don’t like as much red, and then I’ll know
43
which shots I want to be using. I don’t need to make the decision twice or wait for when I’m
44
inside the edit. So now when I’m coming to do my editing, which reminds me of another bit right
45
here, I’m going to go and put edit in here, let me move that along. This is where I put my for me,
46
I’m editing with Adobe Premiere Pro. So I put them in here. So everything’s going to be neatly
47
organized when I have and you’ll see me in a moment put these down into my shots in the next
48
section when we’re doing it. When I have my edit, I save it into here. When I’m open my edit in
49
Final Cut, and I’ve got my project in that I’m doing, then I make sure all my stills come from
50
here that I’ve put in all my my videos are in here that I put into the project. I know where
51
everything is. And it’s super, super organized. I don’t need to go into my downloads and find this
52
I don’t need to re go into, into mid journey and get this back. It’s all super neat, organized,
53
you may have your own way of doing it. This is how I do it. So everything is in one place,
54
you can obviously do this inside your Google Drive, if you wanted to, you’ve seen me
55
already use and start to organize that. But I also download and I’d have it as a doc X or
56
however you want it and put my scripts in here, then the storyboard we’re going to organize in
57
a moment and display that out inside. I’m going to use Photoshop to display that. So you can do
58
that too, or however you want to do it. And you can keep your style guides in here. And the final
59
section I’m going to get right before the edit, if I come over to here is my audio. So audio,
60
of course, is important. And we actually do it if I go to, I’ll just organize this like this,
61
so that we go in order that we create this stuff in, I’m going to pop that in there. And I’m
62
actually going to grab my audio files and put them in there. So if I open this up, I’ve got in here,
63
remember, we were doing this, these are our songs that we started to do. And I’ve got a few more.
64
Yeah, a few more to get in there. And here’s some of the voiceover we were doing.
65
In the quiet hours of dawn, two young girls. And I would organize this like this is my speech
66
to speech is a good version. For example, if this was the best one out of all of them,
67
I just mark that green, I’m going to go with that one. And I put my audio into here. Now,
68
the only other part that you might want, and this completely depends on your product on if you need
69
graphics, perhaps you need graphics, if you’re doing a project for a company or something,
70
I put graphics and fonts. Now I’m probably only going to have fonts in this one, not actual images,
71
but we’ll see. And what I do here is I just drop in my em of the five fonts that we downloaded
72
just now they’re already on my machine. I’ve already imported them. But just so if for any
73
other reason, they’re not there, or I need any reminder about something, then I just grab those
74
and I just put them in here. And these are the five fonts that we’ve been working with.
75
You can see that it’s super organized. This is key key key. Honestly, the amount of times you
76
spend just a few seconds going around to find our ways that is that my downloads is that here,
77
all of those added up, this will save you hours and hours of work just doing this, it seems like
78
annoying at first, but is nothing compared to the time when you organize your project. So I have
79
this graphics fonts, this is background stuff. This is everything going from scripts, which we
80
need after the ideas that we had, I don’t put my ideas in their scripts style guide to make sure
81
I’m getting this out on my storyboard we’re doing today is my audio files have already done, then
82
we’re going to make all our steals, which we did most of them for our style for our storyboard
83
already there in there, but we’re going to have even more. And then I’m going to start marking
84
these off green and red that I want. And then the next stage, you’ll see me do this when I edit
85
is I probably start numbering these. So if I know I’ve got 30 shots, I start putting them in order.
86
So if I start with this shot, for example, we’ve got the establishing shots right here
87
of Japan, maybe this is shot number two, because I start with the American one, which looks
88
something like let me just find it here, like this, this might be shot number one, and it moves,
89
this might be shot number two. And we come into here and we have the fonts and graphics over it,
90
I’ll start doing that next. But obviously, most importantly, I’ll have them in video form
91
numbered to go on to my timeline when I edit. And here’s the edit project saved in there might have
92
more than one, so I might have it in different sizes, like we mentioned. And then when I export
93
them, they’re going to be exported in here. Super neat, organized file organization, very,
94
very important. Please, whichever system works for you just have a system. Okay, just to show you
95
that for this lecture. Alright, let’s move on and put this together. So you can see the storyboard
96
in one place and we can see if it works or not.
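The folder system above can even be scripted so every project starts identical. A minimal sketch; the folder names are my own labels for the folders described, not anything the tools require:

```python
# Sketch: create the project folders described above, numbered so they
# sit in creation order. Folder names are my own labels -- adapt freely.
from pathlib import Path

FOLDERS = [
    "01_scripts",
    "02_style_guide",
    "03_storyboard",
    "04_stills",
    "05_audio",
    "06_graphics_fonts",
    "07_edit",
    "08_exports",
]

def create_project(root: str) -> list[Path]:
    """Create the numbered sub-folders under root and return them."""
    base = Path(root)
    made = []
    for name in FOLDERS:
        p = base / name
        p.mkdir(parents=True, exist_ok=True)  # safe to re-run
        made.append(p)
    return made
```

Run it once per project and drop each asset straight into its numbered home.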
— Making Storyboards Work: What’s Missing? —
So we need to display our storyboard, don't we? You could do this any way you want: just use your Google Doc with the images in order, top to bottom, like you've seen me build my script. I'm a bit more of a visual person, so I use Photoshop, but you could use Canva, Pixlr, or any of the other tools. All I want is a really long, wide image that I can scroll through.

I go for something like 9000 pixels wide by 1800 high; the resolution (600) doesn't really matter, but if you want to follow along, that's how I set it up. I create that, and for this tutorial I'm going to speed up what I'm doing and talk you through how I'm displaying it, pausing where things are important. There are so many different ways to do this, but this is the one I like best and the one I'll show you for this course.
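If you'd rather script the layout than drag thumbnails by hand, the maths of that long canvas is simple. A sketch of the slot positions, assuming my own thumbnail size and margin (Photoshop, Canva, or any image library can then do the pasting):

```python
# Sketch: compute paste positions for storyboard thumbnails on the
# 9000x1800 canvas mentioned above. Thumbnail size and margin are my
# own assumptions; frames wrap onto a second row, two rows maximum,
# matching how I lay mine out.
CANVAS_W, CANVAS_H = 9000, 1800

def layout(n_frames, thumb_w=850, thumb_h=800, margin=50):
    """Return (x, y) top-left corners for n_frames thumbnails."""
    positions = []
    x, y = margin, margin
    for _ in range(n_frames):
        if x + thumb_w > CANVAS_W:          # no room left: wrap to next row
            x, y = margin, y + thumb_h + margin
        positions.append((x, y))
        x += thumb_w + margin
    return positions
```

With these defaults you get ten thumbnails per row, so up to twenty frames fit on the two rows before you'd widen the canvas.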
So I've just placed out my storyboard here. I'm going to annotate it in a moment, but just to show you how I like to do this: we've got our establishing shots of both places, then we establish both girls from behind, drawing, and out of the window we see the naval ships and the town. Then we see the girls front on. They wouldn't really be looking at camera, they'd be drawing, but for this part of the storyboard that's fine. We see them drawing; their fathers come in, obviously naval, one in a normal work shirt and trousers. Then some time passes (I'll show that in the story), they look out of the window and see these explosions: horror, shock on their faces. That's the scene.

Notice I've left a little bit of a gap here; that's because I move these around and start annotating whether there should be other scenes between them. So I'm going to do that now and show you what it looks like when I come back.

So here's what I've done: annotated my storyboard. This is what mine always look like, and I've added in where there are missing shots. I note things like "fade in, the music starts"; titles in the 1940s fonts we've been using, showing the location and the date: this one might say "Pearl Harbor" and the date, and this one might say "Hiroshima", or a nearby town, or something. Then slow movement in on both these shots; they're going to pan. When we come into these next ones, there's also a slow movement behind the girls: the faint noise of ships and the dockyard outside for this shot, the faint noise of the town in the background for that one.

What I'm doing is setting up both the movement I need to animate with AI tools and the noises, what we call the diegetic sound, that we'll create with AI sound effects later. So then we have sounds: there's another diegetic sound, the sound of colouring pencils; again, slight movement, slight movement.

Then we need a noise to carry consistency through the story: the father walking in. You can imagine the sound of shoes on a wooden floor, so the next shots make sense. Now, what's missing here is an insert shot of Amy looking up, smiling, standing as her father walks in, plus the sounds that get us to this shot. This is where the father's dialogue to Amy would come in, and it's exactly the same here: the dialogue to Amy, with subtitles. Then I need to insert more shots here. After they've spoken and embraced, Amy needs to go back to the table to colour, look out of the window, smile, and see the outside scene, peaceful for now, because later we're going to see it have explosions. So let's set that up and see it peaceful.

Show that Amy is not alone. Also, I realised there's a continuity problem here: has the father just left the child alone in the house? So maybe somewhere in the father's dialogue there needs to be "your mother's home, be good for your mother" or something. Fade the shot out so that time is passing, perhaps, and the noise can fade out and the song can change. That's something I probably want to note right here. Let's put that as a note: "song change". I'll put that right about there.

Then we come back to the scene, perhaps back to Amy's face: something is wrong, music change. I note it here as well, just to give myself another visual cue: drama on Amy's face as she looks to the window, something is wrong, walk over to the window; and the same with the other girl, she also needs to see there's something wrong. The music intensifies, that 1940s orchestral style with the high-pitched strings, then you're going to have the sounds of explosions and planes flying. Then we show the shock: Amy's face, Amy's face here, the drama, the explosions lighting up their faces.

Then we need a kind of ending shot, because that alone is obviously terrible. Perhaps earlier in the dialogue her father says to draw a picture of where they want to go or what they want to do this weekend, and he'll take her. Then the picture drops to the floor, showing it will never happen.

Something like that. So these are what my storyboards look like: I like them long and visual, scrolling across two rows maximum (you could have it all on one row, obviously). And now I can start seeing what's missing. So in the next section I'm going to be making all these shots, perhaps reworking some; we're making our still images, and I know the exact shots I need to fill this in with. And when we come to sound effects at the end, I know what diegetic sounds I need: colouring-in, footsteps, and so on.

So that's what my storyboard looks like. Display yours however you want: in the Google Doc where you've been putting your script, so it's all in one document, with these notes written out however you like, or with a storyboard AI like I showed you in the last lecture. So this is mine, and now I'm in a very good place for the next section, where we'll be generating and I'll teach you how to use AI image tools. I know exactly the few shots I'm missing, here, here, and here, to have a complete story that we can then animate in the section after that. So let's get going. Now I've got a task for you.
— Task: Create an AI Video Storyboard —
So, that was the end of the storyboard section. I hope you enjoyed it, especially the last lecture, where I went step by step through making mine. You can see what you need to build pretty much the skeleton, the first part of your production, using the method I suggest for AI video. And of course, end of the section means I have a task for you.

Let's bring up the slide. What I want you to do is plan out your key shots with ChatGPT, just like you saw me doing, or with another AI tool, whichever you prefer; ideally the one that's been generating your script and aiding you, since it can probably best work out what's needed and how many shots make a complete storyboard that tells the story. Then generate those storyboard images. If you're still waiting to learn a bit more about Midjourney, that's the next section coming right up; you can take a lesson or two, see how to do it, and then come back to this. Completely up to you, but make those foundational shots. You may struggle to get the styling correct at first. That was covered in the last section, so if you skipped it, please go back; you'll find it very hard to get the styling right otherwise. Really try, so there's consistency between the shots. If these skeleton shots, this framework, all have the right styling, the rest will make sense and follow along, I promise you.

Build a visual flow, display the shots however you best want to, and of course save and organize everything. We're now starting production: you're going to have a lot of shots, a lot of images, a lot of downloads, so let's start organizing them properly. By the end of this, you're going to have pretty much the framework of your video; you'll have nearly created your video in stills. That's what we do in the next section, and then we get to animate. So go ahead and do that: create your skeleton, your outline, your framework. I can't wait to see what you've been creating. You'll find this aids you so much in the next section; if you skip it, the next part is going to be very difficult and you'll find a lack of consistency. Let's go into the next section now, where I'm going to teach you how to use these AI tools to create amazing images.
— Top AI Image Tools: A Complete Overview —
And now this section on AI image generation, the most important one, alongside the next, for making AI video. You've already seen me create lots of images over the previous few sections, and you probably wondered: what's he doing? How's he doing that? I haven't gone into depth; that's what this section is. I'm going to go in depth on lots of different tools I use, though there are primarily two or three that I rely on. At the end of the section you'll see me continue our course project and hit some of the issues I come up against: how I have to use generative fill and inpainting to get in some parts of images I want, and things like that. You'll see that, as well as some prompting guides, styles, and the camera types I talk about.

If I put up this slide right here, the tools I go over are predominantly Midjourney, DALL·E, Gemini, Stable Diffusion, DreamStudio, Adobe Firefly (really, really good), Runway's image tool, and then some social media ones, which are aimed much more at the casual user: Meta AI, which I think is going to grow a lot, and xAI's Grok, on X (formerly Twitter). We go over all these tools, and I will keep adding to this section as more image tools come out that I think are great.

As a student you have access: go over to our page at aivideo.school/ai-image-generation, where I've got all the details. There's some information at the top about AI image generation, and if you scroll down, there's a drop-down menu for each of these tools. It explains a little about them, plus some key things you'll need to know about the syntax in some of them and best practices. Then I go through each tool in depth on video, one at a time.

After that we concentrate. I like to use Midjourney, with a little Photoshop and Firefly, but Midjourney is my main go-to tool, and there are a few advanced lessons on it after I go through all the tools as an overview. By the end of this section you'll know what all the main tools for creating images are, the strengths and perhaps the limitations of each, and how to use them. Then you can decide what you want to use; perhaps budget is an issue, or the user interface, or what you're using them for in your project. You'll be able to choose from all the best AI image generation tools out there on the market. My favourite is Midjourney, with a little Adobe Photoshop and Firefly; yours might be something completely different. Please let me know, I'm excited to learn. OK, let's get into this. Let's get into the section.
— Midjourney : Start Here – Introduction & Access —
So, Midjourney. Midjourney is by far my favourite image creation tool. It's also one of the major image creation tools out there, if not the major one. You might be using an Adobe product or something else, and I get to more tools later, but I really go into depth with Midjourney because I think it offers the most versatility, not only with images but now with video also. I use it primarily for image creation because I think it has the best results and the easiest ways to edit and change them to get exactly what you want, as well as using references. So over the next four or five lectures or so we'll go over Midjourney, and I'll break it down step by step, exactly how to use it.

First off, we need to go to midjourney.com. If you've been using Midjourney before, it used to be primarily on Discord, and it still is; you can access it via Discord if you're familiar with that. If not, it doesn't matter: Midjourney now has it in the browser, and has done for about a year and a half or so. So go to midjourney.com and you'll come to something that looks like this.

Now, you need to sign up, so let me show you the subscriptions; let's go to manage subscriptions here. Obviously, depending on where you're located, you might see a different price instead of dollars, or you might still see dollars and just get charged in your local currency. There's yearly or monthly billing. I tend to go monthly, but it's completely up to you, and you do get a slight discount with yearly billing. Right now I'm on the Basic plan at $10, and what I do is subscribe and then unsubscribe instantly. It's just a habit of mine: I have access to so many AI tools that I'd lose track of all my subscriptions if I didn't. So I'm pretty much paying for the month, with full access; it cancels at the end, and if I want to, I can just start it again after that month.

There used to be a free trial. I believe that's not available anymore in any location; it may be for you, so let me know if that's the case. But for $10 you can trial this for a month, which gives you up to 200 images, three concurrent fast jobs, and the other things listed here, and obviously the higher plans give you more. The main things you probably want to think about are, first, how many images you're making: if it's more than 200, you might not want this plan. And second, fast hours. Every time you generate, say it takes 10 or 20 seconds to get an image, it all tallies up, and you get 15 fast hours of quick generation. Once those are finished, it generates more slowly. That's the main difference between the plans, so you can go and check those out and decide what's best for you.
You'll probably land on the Explore tab when you first come to the page, so let me go over the layout now that we've done signing up; then we'll get into creating in a moment. Explore is pretty much just that: explore images, video, and styles. This page may change, though Midjourney doesn't change too much, which is good. I've previously recorded this whole section and done updates throughout the year, and now I'm re-recording it so it's all in one place. If there's an update, I'll add it before or after this lecture and you'll see it. There are old lectures at the end of the course if you're on an old version, which you're probably not and shouldn't be, but you can go and check those out. The layout may change slightly, but not very much.

On the Explore page, then: videos, images, or styles, and you can browse by certain styles if you want: anime, illustration, realism, and things like that. You can scroll through images, and now that Midjourney has video, you can do the same with videos. The good thing about Explore is that if I'm going along and say "wow, I really like this image" and click it, I can see the actual prompt that was used. We've gone into prompts quite a lot; you can go to the prompting page, and to the image generation page for this section. You can find that in your welcome email from when you joined the course, or in the first section, in the lecture called "Links", which details the image generation link for you to access. It's a written, step-by-step version of this for prompting and so on. But here you can see someone's prompt, so I can literally copy it if I want to: I click "use text" and it populates in the prompt bar, where you're going to start generating, and you can start generating other images you like. That's a good way to start learning how to prompt; there are other ways as well as this course, and I'll get to that in a minute.

So that's the Explore tab. Then there's Create: this is where we're actually going to create, the main tab if you like, where we make our images, and I'll show you how shortly. Edit is where you can edit: you can either drag in a URL or upload an image; sometimes when you're editing it opens in this tab, sometimes inline. Moodboards lets you create your own mood boards; we didn't cover this in the mood board section, but it's a great option if you're using Midjourney as your primary tool. Organize pretty much lists out all your generations over time, so if you've been using Midjourney for a while and need to get to an old image, you can scroll back and find it. Chat is essentially part of the Discord brought online: you can go through and see what people are talking about, issues they're having, ideas, and things they're sharing. And Tasks is where you can rank images or do surveys; doing so helps Midjourney learn what style of images you want. I tend not to do it, because I don't want it to have any kind of preference; I trust my prompting skills. I don't want it to decide I love really dark imagery and then never give me a really bright image again, so I pretty much ignore this tab.

Here, pretty obviously, is where you get your account access, and then here are help and updates. Updates tell you what's happened and what version you're on. If I go to Create, which we do in the next lecture, I can see the version right here: I'm on version 7. Older versions relate to some of the older lectures I mentioned, but we'll stick to 7, and if it updates, I'll update the course.

So that's a quick overview of Midjourney; you're probably going to come to this page, and now you know where you are in the layout. Next we've got to get into creating: how to create an image, all the wonderful, amazing things we can do in the settings, and editing images and things like that; loads of great stuff to be doing. So let's get into that in the next lecture: let's talk about the settings and setup, and then let's create some images.
— Midjourney : Setup and Settings —
So now we're in the Create tab on Midjourney. Step by step, I need to show you how to set this up. You can see I've got this size and shape set, 16:9 landscape and all that; that's up here. This is your bar, where you're going to prompt: "I want a picture of a man", whatever you're going to say. You can also add images and things; I'll get to that in a moment.

First off are the settings, which are a quite simple step 1, 2, 3, 4. The first one here is image size. You can reset these anytime here, by the way, and it goes back to square. There are lots of ratio options, 5:6, 4:3, 9:16 and so on, and you can see the orientation as you click along. Obviously, if you're doing something for YouTube or regular video, you want 16:9; for Facebook or Instagram, maybe 1:1; and maybe vertical if you're doing social media posts, TikToks, or Shorts. Rather than having to know the ratios, you can also just pick square, landscape, or portrait, although portrait there is 3:4; I would go to 9:16 if you're doing it for mobile. That's where you set your image size, and everything you create is going to be that size unless you change it. So many times I've set it, then wanted an image in a different size and gone "oh no, I've got the wrong size". You have to come up here and change it back, so don't forget to do that before you start generating.
Now, aesthetics. I pretty much keep these on the default settings, or even here. If that's too much, you can click the question mark and it'll tell you: stylization influences how strongly Midjourney's own aesthetic is applied. Low stylization values produce images that closely match your prompt, and the higher you go, the wilder it gets. So let's do a couple of examples. If I take stylization right down and prompt "a black dog looking at camera, photorealism" and submit, then do exactly the same thing again with stylization right up, we can compare them.

Here's the first one: black dog looking at camera, photorealism. This one's a bit more illustrative, but somewhat photorealistic. With stylization up, it's done all sorts of things I didn't ask for: water on the dog, quite prominent white hairs, less realism; I think this one looks like a watercolour painting. So it changed it, though not as much as what I'm about to show you with weirdness and variety.

Let's keep weirdness down here first, which would give exactly the same result, so no need to regenerate. Now let's put weirdness right up to the top and run it again; then I'll put weirdness back and turn variety up, and you'll see what each does. So this batch is weirdness and this one is variety, as they call it. Weird generates its own thing: they've got a crow here and a strange background, and this dog has odd stuff behind it. It's not really concentrating on my photorealism prompt; it's doing its own thing. And wait for this variety: there's barely even a dog in some of them. An oil painting of a black and white dog, a slightly odd dog's face here, and then variety really spins it out.

Why would you ever use weirdness and variety? If you're not looking for something very specific, which I can't imagine we would be on a video course like this, but for inspiration, perhaps you want these so it can generate something you can't even think of. Sometimes those results are really strange: I'll put in "I want a dog" and get a picture of a ribbon or something; they're really out there. But that's aesthetics. I pretty much keep it on the defaults. I trust my prompting; we're going through prompting, so I want the output to adhere to what it is I prompted for.
Now, on the model right here, you've got standard and raw. I pretty much keep it on raw all the time; if I click that, it pops up raw mode, so you gain more control. Standard versus raw, in a nutshell: raw mode keeps it raw, exactly to your prompt, while standard gives Midjourney a little bit of licence to add its own thing. Now, it might sound funny that I quite like standard too. It's not the same as weirdness, where it's changing things; but when you prompt, you're using our own prompt structure, which is pretty good, while Midjourney also has its own structure it's looking for. Standard almost corrects your prompt internally, invisibly, into the form most likely to get you what you're looking for; raw doesn't. You won't see much difference on a prompt like the one I just did. If your prompting is good enough, you'll get what you want either way, so this won't make a huge difference, nothing like those aesthetics sliders; leave it on whichever you like.

And then this is your version right here: keep it on 7, or whatever the latest one is. Now, depending on the subscription you have, which we spoke about in the last lecture, you might want fast or turbo speed. This is how quickly you generate images; obviously you're going to use your hours up faster with turbo on, and you saw how quickly it generated on fast anyway. Not an issue, I think. Then the resolution, standard definition and high definition, but you only get high definition on the Mega plans. And then the video batch size: when you're creating videos, how many do you want per generation? Obviously you use more of your hours generating four at a time. I pretty much keep it on two, because I want at least two varieties of my video (we'll get to that later), but you could have four or one. Completely up to you.

So those are the settings for when you come to create. You need to get those down; you can copy mine if you want, and you might want to change things depending on what you're creating for. But if you're creating video like I am, have a look at these settings: I set it up exactly like this. And now we can get to the next stage and start creating some images. Let's do that now.
— Midjourney : Advanced Prompting —
Before the next lecture, where I'll show you some specifics of editing and inpainting, some really detailed stuff inside Midjourney, I want to make sure you know a few things about prompting. Not the basics; we're going a little more in depth here: some syntax and parameters for repeating things and so on, just to make it a lot easier for you, plus some shot styles, because depending on how much you know about filmmaking, you may not realise that what you wanted was a low-angle shot or an establishing shot. So we'll go over a few things, and I'll bring up some slides now.
9
So you have access to this, you can download it from here and you can check all this out
10
and I have on here the ideal mid journey prompt which would be in these five points here,
11
put in the type of image and emotion almost, the composition, then the character image
12
description, style and then some extras and I’ve got an example here so a cinematic image
13
that’s the type of image that it is, moody, gritty, okay that’s the emotion that it is
14
and I’m saying ultra realistic and then the composition here point two is wide establishing
15
shot of number three here character of a man, he’s aged 50 and he’s in a red leather jacket
16
with long black hair, he’s looking into the distance, the style is a classic western style,
17
extras if you wanted him to be holding a gun or something like that or in the background
18
have a herd of horses and things then that’s where you’d put in the extra details, do remember
19
though not too much extraneous detail because the prompting works best if you can get it
20
as neat as possible, if you start putting lots of extraneous stuff then the prompt might
21
start concentrating on those things as opposed to what you really want, also avoid conflicted
22
information we’ve seen this a few times as we’ve gone through some of these models if
23
I say gritty ultra realistic and then I also say animated cartoon style happy bright then
24
the model is gonna get confused and you’re gonna get some funny results, turn your settings
25
to be raw like we’ve shown you in the last one just to make sure it’s adhering to this
26
prompt otherwise it might be doing something completely different in that you give it
27
allowance to start doing a little bit different to your prompt and that’s not what we want when
28
we’re trying to create specific images and you can research the style first, we have the style page
29
to make sure that you’re doing the right style, if I want this in a western style, if I want this
30
in a cyberpunk, if I want this in blue lit hue kind of thing, know the styles and give that
31
information now I want to show you some parameters and syntaxes these are things you can type into
32
now when we used to have the discord version only of mid journey then you’d have to use a lot of
33
these even if you wanted it 16.9 you’d have to type in dash dash a r space 16.9 now on the desktop
34
version it’s super easy as you’ve seen you can just go to this and you can choose what it is in
35
your settings like we did in the last lecture where we’re setting up I wanted a 16.9 now there
36
are still some things here because mid journey can do so much that you will want a syntax for
37
they may be added to in the future inside the bars there that you can just slide or toggle on or off
38
but for now I think these are the main ones that you’re going to want to use now repeat very handy
39
you’ve seen me use that all you do is type in dash dash r space and then between 1 and 40 and it’ll
40
generate that so if I say dash dash r space 4 because mid journey does 4 images anyway you’re
41
going to get 4 times 4 you’re going to get 16 images that are going to be brought up now image
42
weight is also quite good when you we click that and you’ll see us do this a lot in the next lecture
43
when we click this symbol it won’t mean anything until I’ve shown you how to use it but bear it in
44
mind for reference then I can choose an image weight so if I upload an image that I like and
45
then I do the image weight of zero it’s pulling on that image weight less than if I use three
46
we’re going to be doing that in the next lecture the same with star weight when I
47
attach and have an image and I click the style weight is the style of that image between zero
48
and a thousand if it was a moody dark cyberpunk image and I put it on a thousand it’s going to
49
draw on that a lot for the style for that specific image uploaded as opposed to if I did zero now
50
negative prompts are handy it doesn’t always do it first generation and you might have to do it a few
51
times but dash dash no for example that image I was talking about in the last slide and I didn’t
52
want any horses in my image from that guy which mid journey may put on there I’ll do dash dash no
53
horses for example now prompt weight that’s how much we don’t really need to do it because we can
54
assign the syntax here dot dot and we can say for example two one for certain things I don’t use this
55
a lot but some people do the syntax mid journey sets the prompt weight allowing you to assign
56
relative importance to different parts of the prompt by separating them with dot dot followed
57
by the weight so it is handy for example if I have a sky to forest one it’ll give the sky twice as
58
much influence if I want a man and his horse or a woman and her cat or a child and their dog whatever
59
it is I could have dot dot two after cat and I could have dot dot one after woman child man
60
whatever just to make sure that in the prompt and this is handy especially when you’ve got
61
conflicting stuff to give it just a little bit more prompt weight towards that very very handy
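The parameter forms above, the :: weights plus flags like --no, --r and --ar, can be sketched as a small prompt builder. The helper function here is my own illustration, not a Midjourney tool; only the output syntax follows Midjourney’s documented forms:

```python
def weighted_prompt(parts, no=None, repeat=None, aspect=None):
    """Assemble a Midjourney-style prompt from (text, weight) pairs
    plus a few common parameters. The ::weight syntax and the
    --no / --r / --ar flags follow Midjourney's documented forms."""
    prompt = " ".join(f"{text}::{weight}" for text, weight in parts)
    if no:
        prompt += f" --no {no}"      # negative prompt, e.g. --no horses
    if repeat:
        prompt += f" --r {repeat}"   # repeat the run 1-40 times
    if aspect:
        prompt += f" --ar {aspect}"  # aspect ratio, e.g. 16:9
    return prompt

print(weighted_prompt([("sky", 2), ("forest", 1)], no="horses", aspect="16:9"))
# sky::2 forest::1 --no horses --ar 16:9
```

The same builder covers the man-and-horse case: put the higher weight on whichever subject the model keeps dropping.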
Now, quality. We don’t need to use it too much, because we can upscale later, but you can set --q from 0.5 to 2 and it does adjust the quality of the image generation, higher values producing more detailed and refined results, though it will take a little more time to process, depending on the plan that you have, like we’ve discussed. So I think those six are the parameters and syntaxes you’ll still use. You might not be using any of them, but you probably will be using these three on this side especially.

Now let’s move on to shot types, because you’re going to be asking Midjourney, hey, create me a shot of something, and if you didn’t go to film school or study any kind of media or communications course, some of these might be completely new terms to you. Some of you will know them and can skip forward in this lecture. I’m going to talk about lenses and f-stops in a moment, but first I’ve got two slides here with some different shot types you’ll probably want to bear in mind.

First, an establishing shot, which some people call a wide shot, though they’re two different things. An establishing shot is something like this: an establishing shot of New York City or a similar city. Then there’s a wide shot, which is great to set up a place, and then a medium shot, a close-up, an extreme close-up, and an over-the-shoulder. I’ll go through these. For example, if I was creating a scene, I might start with my establishing shot: now people know we’re in New York City. Then I’ll cut into a wide shot: we’re following this blonde woman as she crosses the street. Now I know exactly where I am in New York and we’re following this person on the street. Then I might go to a medium shot: maybe she pulls something out of her pocket, grabs something; perhaps it’s really intense and it’s a gun she’s pulled out. Then I’ll cut to a close-up of her face, and it gets really, really intense: oh my god, is she going to shoot someone? Then an extreme close-up of her eyes, or of the gun. That’s how you can start creating a story through different shots. But if she suddenly has a conversation, someone else comes up and says, hey, what are you doing, why have you got the gun out, what’s happening, then an over-the-shoulder shot is really handy, because now we know this woman is talking to that woman. Then you’d reverse it and have the other woman’s shoulder on the opposite side, the left-hand side of the screen, and this woman’s face on the right. A really handy shot, and these are terms you can use inside your prompts.

A few other shot types to be aware of. A low angle shot: this is an extremely low angle shot, but you could be just slightly low. And this is a very high angle shot. Midjourney, and I think I mentioned this when we did the Adobe Firefly lesson a few lessons ago, tends to do the extremes, whereas when you generate your images through Firefly it’s definitely more subtle, but you can just keep working these. A low angle shot gives someone power and dominance: this person is dominating the city, they’re a powerful character. Looking down on someone does the exact opposite: you’re looking down on them, and it gives them a lot less power. Some other ones you might want: if you’re opening a scene you might want a drone shot, or perhaps you’d call it a bird’s eye view, almost straight down or sometimes at a bit of an angle over the city; these are really nice with movement going over the top. Then some others you might want to know: fisheye, perhaps when someone’s looking through, or about to look right up into camera, like they’re looking through the peephole in a door, or just to give the illusion that something’s a little off and weird; perhaps in your story she’s having a bit of a mental breakdown, and this is really nice to connote that inside an image. It’s all about the connotations we’re pulling with an image here. Power: you understand that without even knowing it, you subconsciously understand it; and you subconsciously understand there’s something wrong with a fisheye image, it’s a little distorted and strange. Now, the interview or asymmetrical shot is great if you’re doing people talking to camera: they’re often placed over to the right or the left looking at camera, as opposed to straight in the middle. Opposite to that is a symmetrical shot; this one is a little Wes Anderson, strange and quirky with the lighting and the symmetrical lights, but they’re kind of the opposite of each other.

Now, the last thing before we go into creating a lot of this and start using Midjourney to create some images: I want to talk about lens types. Some people don’t use these in their prompting at all, I often don’t, and some people love them and put them in every single prompt. I won’t go massively into depth with it; you don’t need a huge understanding of all of this, like we’re training at a film school and playing with the lenses, but just to give yourself some understanding. You could say out of focus, tight shot, wide shot, that kind of thing, but you can also use the millimetre of the lens, going from a wider shot to a tighter shot. We’ve got 10mm, which would be a kind of wide shot outside, probably even slightly wider than this, all the way up to 200mm (you can go way higher than that), where we’ve got a really close shot right here. So you can specify the kind of lens that you want, and you’ll often find that a wider lens, depending on the f-stop, which I’ll mention in a minute, has more or less in focus compared to this. So those are the lens types in millimetres; you might say in your prompt, shot on a 35mm, and it’s a medium shot.

You may also want to say the f-stop. Don’t get confused or worried about this; it’s quite simple. F-stops run from, say, f/20 (you’re probably not going to want any higher than that) down to f/1.8. The smaller the number, the more shallow the depth of field, which means a more blurry background; the bigger the number, the more that’s in focus in the background. What you have to remember is that the f-stop describes a circle inside the camera lens letting light in, the aperture; that’s actually what’s happening. The smaller the number, at 1.8, the bigger that circle and the more light that comes in, but the shallower the depth of field, blurring the background. The bigger the number, the smaller that circle, and the more that can be in focus, all the way from the front to the back. A shallow depth of field is great if you want to give concentration to a character as opposed to the background, whereas with the other shot at f/20 you’re obviously showing this person is on this street, and you want the viewer to see the street and where they are; at f/1.8 they could be anywhere, and the concentration is definitely given to the character’s face.

So that was some prompting and some things you need to know before we go into the next lecture, where we’re going to start really editing these images, really start playing with them, using images for style weight and things like that, and some inpainting and editing. Let’s go over and do that now in Midjourney.
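To make the f-stop relationship concrete: the aperture diameter is simply the focal length divided by the f-number, so a smaller f-number really is a bigger opening. A quick sketch of the arithmetic:

```python
def aperture_diameter_mm(focal_length_mm, f_number):
    """Physical aperture opening: focal length / f-number.
    Smaller f-numbers mean a wider opening, more light,
    and a shallower depth of field (blurrier background)."""
    return focal_length_mm / f_number

# A 50mm lens wide open at f/1.8 versus stopped down to f/20:
wide_open = aperture_diameter_mm(50, 1.8)    # ~27.8mm opening, blurry background
stopped_down = aperture_diameter_mm(50, 20)  # 2.5mm opening, deep focus
print(round(wide_open, 1), stopped_down)
```

So when a prompt says “shot on a 35mm at f/1.8”, it is asking for that wide-open, shallow-focus look.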
— MidJourney : Creator Actions and Editing —
So now we need to create some images, so I can show you the creator actions and editing tools, and then we can start getting some references so you can start putting specific people or objects into images and things like that. Let’s start from the beginning.

So I need to prompt for an image. Now, we’ve done prompting before; you know all about that. If I come over to AI video school slash AI image generation, which you have access to (you can get these links from the first section of the course, or they were emailed to you in your welcome email), you can scroll down to Midjourney, and here it’s got some great stuff about what’s needed in a prompt: subject, detail, art style, composition and camera parameters, anything like this. And I’ve got some examples right here, so I guess I’ll just take one of these. You don’t need to worry about things like --ar; that’s if you’re using Discord, you don’t really need it on here. So I’m going to take this one: peaceful sunset over a mountain lake, warm colors, realistic detail, high resolution. Let’s paste that in and run it. Once again, my settings were all set up for the size and everything I wanted, so let’s hit that.

I’m also going to do something you’ve seen me do in our very first lecture. Let me find an image on the explore page that I like the look of. Let’s scroll and find something. Oh, this is a cool image: a white model, 90s room, vintage TV. Okay, let’s use its text: just click that, and I’ve got my own settings applied, so although this one is square, mine is going to be produced in 16:9. Let’s hit run. Now, that doesn’t take you away from the explore page. Click back over to create, and I can see here’s the image for my prompt: a peaceful sunset over a mountain lake, warm colors, realistic detail, high resolution. Great. And here’s the model image starting to be produced: white model in a 90s room, vintage TV, VHS recorder on the ground, edgy, realistic, 90s aesthetic, ashtray on the ground, grunge. Yeah, you definitely get those 90s Kate Moss vibes. Really cool, eh?

Okay, so now we have these images, and we can play with both of them. Let’s play with the person; that’s probably easier for showing you some things. So if I click on it, over here you’ve got something called creator actions. (By the way, that was me scrolling up and down on my mouse; you can see the images moving right here, so here are your images if you want them, rather than closing this and going back.)

So, creator actions. I’ll go through these top to bottom. You can ask it to vary the image: I want it again, but with subtle changes, or with strong changes, and I can see them happening right here; I’ll show you that. I can also upscale it, subtle or creative. Upscaling means it will take this okay-quality image and make it better quality, which you might want to do if you’re using the image for any kind of print, or if you want to turn it into a video and want the best quality possible. It says subtle for these, but really it means exact: there may be a speck of difference or something. Creative means it will do something quite creative; it will be quite different. For example, on our image right now (I’ll close this, it’ll be easier to show you), we said, okay, can you vary this, subtle. Our first image right here is pretty much similar: you see it’s changing the cabinet, changing the cabinet, no cigarette, changing the cabinet. And then I said vary this, creative. Now it’s changing the whole room layout, the camera angle, where the woman’s looking, things like this. That’s the difference between the two.

So let’s head back. More: you can just rerun this as it is, though I could obviously take the text and rerun it myself if I wanted to. Let’s remove that, and then I can also edit. So let’s go to edit. I click edit, boom, and it comes up with what you want to do. I can scale it, making the image smaller within the frame. If I do that, let’s just do that shall we, and hit enter, you see it popped up, zero of one, creating. It’s going to create, and where the image meets the edge here, it’s creating the rest of the background. Okay, so let’s go back to our creation right here. You see the original was very small, like this, and it’s made up the whole rest of her room, which is really good, because obviously if I’m like, oh, that shot’s great, but I want it slightly wider because I want to zoom in to get to this shot, then you can edit it, and it will make up the whole rest of the background for you. Let’s put that back to where it was.

Now, the tool you’ll probably use the most is the erase, or, if you’ve rubbed out too much, the restore, and I can choose a brush size. If I hover over here, like that, this is where you can start doing things. If I don’t want her to have a cigarette in her mouth, let’s just paint over that right there. I can type something like remove cigarette, hit enter, and I can see zero of one right here; it’s creating. Or you could be removing the ashtray, or I could change a trainer’s lace, or remove this; anything I want to. You could even erase over her pants here and say, make these red pants, if you wanted to change them. It’s completely up to you. If you’ve rubbed out too much and you want to restore it, just rub back over it with restore. This is the main reason I love Midjourney: the editing tool is really good.

Let’s go back to create. Now, here’s the thing about this: when you’re creating images, yes, it gets it wrong many, many times. I said remove the cigarette. Do you see the cigarette in her mouth? Of course we do. So when you have to remove something, let’s keep going, you may have to do more than one prompt. So let’s remove that again, and I’m just going to try a few different things. Let’s just say remove on its own, and now I’m going to say what’s actually in the image: woman’s mouth. I’ll hit that. Let’s also do lips, red lipstick, and run, and see if any of these manage to understand the prompt that I’ve got. I can already see that when I said remove cigarette, using the word cigarette was somewhat confusing it, so it still included one. Just use the word remove if you’re removing something. This one looks really good here, very realistic. Yeah, really good, really good. Here I said woman’s mouth, so it’s just redrawn the woman’s mouth, and here I said red lipstick and it’s put her mouth there with red lipstick. That’s a pretty good one. So that’s how you use edit to remove things, which is probably the main reason you’ll want to use the edit tool.

Let’s get back to our original image and choose a new option right here. So, if I want to, I can use this image as a style and prompt reference. I’m going to get into that in the next lecture, because that’s the whole topic of using references. And right down here is where we animate, which means turn the image into video. If I want to turn this into a video, I can. I can have it do this automatically, or animate manually, where I say how I want it to animate. Automatically, I can choose low motion or high motion, less movement or lots of movement. I can also loop it: you know, people make shorts where something loops and loops and loops, either with low motion or high motion. But this I’m going to cover in section 10, to keep this all organised: section 9 is images, section 10 is video, so we’ll cover this in the next section.

So those were the creator actions. There are also some more options right here: you might want to remix, and you can also pan or zoom. Just like that, I can remix it, subtle or strong, which is a lot like vary. Pan moves the image: if I have this one and I want to pan across that way, you see it’s creating again up here. I can also zoom it, 2.5, 2 or 1.5; let me show you those images now it’s creating. So right now, you see it panned, and now I’ve got more, it’s a wider shot, so I can now crop it to 16:9 if I wanted, which you could do inside the edit right here: if I wanted to edit this, we can change the settings to 16:9, that is, from 9:16 to 16:9, and just drag this here, drag that here, and now I’ve got the extra bit I wanted. Maybe I’ll use that. Or maybe you’re using these images because, when you’re making video in your editing software, it wouldn’t pan from left to right on its own, so you tell it, here’s my first image, here’s my last image, and we slowly pan between them. There are lots of reasons you could be wanting to use that.
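For the crop from one aspect ratio to another, the arithmetic is just the largest centred window at the target ratio. A small sketch of that calculation (my own illustration, not tied to Midjourney’s implementation):

```python
def center_crop(width, height, target_w, target_h):
    """Largest centred crop of (width, height) matching target_w:target_h.
    Returns (x, y, crop_width, crop_height)."""
    target = target_w / target_h
    if width / height > target:       # frame too wide: trim the sides
        new_w = round(height * target)
        return (width - new_w) // 2, 0, new_w, height
    else:                             # frame too tall: trim top and bottom
        new_h = round(width / target)
        return 0, (height - new_h) // 2, width, new_h

# A 9:16 portrait frame (1080x1920) cropped to 16:9:
print(center_crop(1080, 1920, 16, 9))   # (0, 656, 1080, 608)
```

The same function covers the reverse direction, a 16:9 frame cropped down to square or portrait, by changing the target ratio.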
Let’s go back to create, and here we are. Yeah, the zoom looks nice; it’s really like a fisheye kind of lens, really cool. Okay, so those were the creator actions and editing, which are very important. Next, let’s go on to the bit I didn’t show you here, which is using an image as a style and prompt reference, which is also up here. Let’s go into that in the next lecture.
— Midjourney : Image, Style & Omni Reference (consistent characters) —
So now we’re going to start using image, style and prompt references, which we can also access up here, for getting the images you want. This is really useful if you want to take an image you’ve already used in Midjourney, or an external image, perhaps an image of yourself or someone or something else, and say: I want this style, I want this image, I want this person inside an image. Obviously, lots of people want to use this for continuity when making video: if I was making a video about this woman and I want a different shot of her in a different location, then I want her to look the same every time, so I’m going to use one of the following tools.

So if I click up here, I can say add image, and add images, like here’s one of me, for example. Above it you’ve got starting frame (video); that’s for video, and if I just click the image it automatically goes into there, so ignore that right now. Then you’ve got image, style reference, and omni reference. If I take this image of me, I can add it into my prompt as an image reference, which pretty much means it’s going to take elements of the image. Then I’ve got style, which is the style: you see it’s a purple background, quite white and clear here, white top, that kind of realistic style. Or omni reference, right here, is using me as a reference for a person: if I wanted to have me, this person, in multiple shots, I would use omni reference to make sure I’ve got the right person in every single one.

Now let’s do that for a second. Actually, let me grab me again, put me in here, and say: this man in New York City, and let’s take a little look at that. You could do the same thing with this image you’ve created inside Midjourney. If I want the same image style, I can just click here and now it’s adding the image, or I could click that I want the same style, or the same person, right here, and now I could say: this woman in New York City, and run that. Or, if I wanted, I can also choose things like style and the person together, so if I now say, this woman in NYC, it’s going to take the style reference, so the type of image it is, the colours and things, and the omni reference. Let’s hit that.

Now, I’ve generated quite a few things here, so let’s take a little look. This time it’s taken that image of me, and I’ve said, hey, put this man in New York City. This one and this one look the most like me, this one slightly less, but you can see how you can use yourself in here. I do talk about face swapping later, but omni reference has become so good now, and so has reference inside things like Runway, that you might not need it at all. So this is kind of cool; I can use me here.

So now we’ve got that woman we used, in New York City. It didn’t do anything here or here, but it’s taken the woman and put her right there in New York City. Yes, that looks like the same woman to me; pretty good, inside New York City. Now, I said do it in the same style. The trouble with style and image references is that they take a lot from the actual image you give: maybe it keeps New York City at a window, whatever it is. So I would use omni reference if you want the woman, and I would describe the style in your prompt, for example 90s grunge feel. If I do that right here, I can click right there and it automatically populates omni reference, you see that little symbol right there, and I say: this woman in New York City, 90s feel to image, style grunge, and I prompt for the style. I do use style and image references sometimes for something, but we’ve already been through how to do a prompt perfectly, so I would actually use your prompting to get the style you want. Or you could use something like Whisk, which I talk about later, where you drop images in and it tells you what the style is. But it’s completely up to you.

So this woman’s now in New York City. It’s still putting her inside the room on some, which omni reference will do, and look, it’s taking the TV even, for some of these. But here she is outside in New York City. Of course, I’ve just said New York City; I could say Times Square. Let’s actually do that: let’s put the person in here and say Times Square, New York City, outside, daytime, and hit run. Okay, now we’re getting somewhere. Not these two with the TV, ignore those; it might be something to do with me having it on standard, like we spoke about, as opposed to raw, or even draft, but I don’t think so. But look at this woman here: this is actually a really nice shot inside Times Square, and it’s got a 90s grunge feel, with 90s cars in the background even, by the looks of it. Really nice. And then you could change and clean up this image with editing if you wanted to change anything in here.

So these are really important tools that you’re going to want to use: image, style, and especially omni reference. Even when I’ve created things with people before, like creating a character for a movie, I’ve put them on a green screen and used them like that: I put them in with omni reference and say, I want this person doing X in Y location. It’s really handy. So they’re really, really handy tools to have, so you can create any of the images you want.

So now this brings me to the end of the little Midjourney section. You pretty much have everything you need: how to get to the site, the layout, then we went into the settings for setting it up, your creator actions, how to create; everything about prompting you can get on here, or we had a whole section on prompting, of course, covering the perfect prompt for Midjourney or any image creation tool really; and then we’ve gone into how to edit these, and styles, and omni reference. The last thing is obviously going to be turning these into video if you want to, but I’ll add those lectures into the next section, section 10, to keep everything in the right section. So you can go off into that, or continue looking at some other image creation tools here, and then I’ll see you over there in section 10 shortly.
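On Discord, the same reference kinds are expressed as prompt parameters rather than panel buttons. A sketch of what that assembly looks like; the helper is my own illustration, the example URL is a placeholder, and I’m assuming the --oref/--ow (omni reference) and --sref/--sw (style reference) parameter forms, so check Midjourney’s parameter docs for the current syntax:

```python
def reference_prompt(text, omni_url=None, omni_weight=None,
                     style_url=None, style_weight=None):
    """Build a Midjourney-style prompt carrying reference parameters.
    omni_weight and style_weight range 0-1000; higher draws more
    heavily on the referenced person or style."""
    parts = [text]
    if omni_url:
        parts.append(f"--oref {omni_url}")        # person/object reference
        if omni_weight is not None:
            parts.append(f"--ow {omni_weight}")
    if style_url:
        parts.append(f"--sref {style_url}")       # style reference
        if style_weight is not None:
            parts.append(f"--sw {style_weight}")
    return " ".join(parts)

print(reference_prompt(
    "this woman in Times Square, New York City, outside, daytime, 90s grunge feel",
    omni_url="https://example.com/woman.png", omni_weight=400))
```

Keeping the character in the omni reference and the style in the written prompt, as above, mirrors the advice in this lecture.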
— Nano Banana (Google): Create & Edit Images with Text Prompts & Image References —
1
Now I’m going to show you Nano Banana. Google has really in the last six months or so really
2
started taking over with regards tools we’d want using AI, especially for the video and
3
image space, but video, especially you’ll see that with VO3 and Nano Banana, another
4
tool that’s been released we can use here inside Gemini is really good for several reasons.
5
So it’s different from other image creation tools that I’ve shown you and you’ll want
6
to use because we use text prompt here to alter the image or change the image as opposed
7
to something like some of the other tools where you might use an editor and an eraser
8
tool and things to tell it where and what to change. All text based prompting. So let’s
9
get here first. The first thing you want to do is log into Gemini. Just search Gemini
10
and log in. You can log in with your Google account or create one and you can do this
11
free and you get quite a lot of allowance free on Nano Banana. You can also, I’ve got
12
the ultra plan here because I use VO3 and it’s all connected. Obviously it’s all Google,
13
but you can get quite a lot for free here, which is something people want to test out
14
a lot before they then upgrade to get faster generations, more generations, etc, etc. So
15
let’s use this because it’s really good. You can text prompt for images and changes and
16
also use images as references really, really well. So when you’re on Gemini or look something
17
like this here, it’s already got it. See this banana symbol here or you can go to tools
18
if you don’t see that and say create image and now you are using Nano Banana. So the
19
first thing we can do is just let’s just simply prompt for an image, shall we? So really simple
20
here. I’m not describing too much. You can go into much more detail if you want to. A
21
man aged 20 at the edge of a waterfall taking a photo with his iPhone 17 of the scenery.
22
I could have obviously described the man. I could have described the scene, the time
23
of day and all this stuff. But for the sake of this, you can see what we’re going to generate
24
here. Okay, that took just seconds to generate. Really nice. Let’s take
25
a look at this. We’ve got a guy, definitely about 20, holding his phone, which I think looks
26
a little too large here, and there’s a drone in here. Also, it’s not entirely realistic in
27
style. Pretty good. But let me do this. Okay, let’s go back and we can alter this.
28
Okay, so let’s say first, let’s do something like ‘remove the drone from this image’. And
29
then we’ve generated exactly the same thing; I just prompted to remove that drone. There’s no
30
need to use an eraser tool or anything, I can prompt to remove it. So we can make some
31
other change if we want, like ‘change the time of day to sunset’. Okay, we’ve got exactly the
32
same image, this time at sunset. So we’re having this back-and-forth conversation. So if I want
33
to do something like ‘make the man sit down on the rock and make the iPhone smaller’, because I think
34
it’s slightly too large for his hands there, let’s do that. Okay, great. So now he’s sat down. I
35
think maybe that’s actually the right size for an iPhone Pro Max. Let’s take a little look
36
at this. He’s now sat down on the rock in the same scene. And you can see you can go back and
37
forth. So I’ll prompt ‘make the image more realistic, photorealism’. Here we are. Let’s
38
take a look at this. Okay, slightly more photorealistic. It’s added some dust, I think, on the
39
lens to make it look more realistic. But you can see you can go back and forth with this. And I’m
40
just having a conversation with it, going from our very first image: I want to change and remove
41
something, change the weather, change the position, change this. I can also go ‘change the
42
man to a 20-year-old female’. Okay, and now we’ve changed her, with pretty much exactly the
43
same clothing. If I go back to the other image here: yeah, identical. You can see you can just
44
keep text prompting and change it. So good to be able to do that. Now the other thing you can
45
do is use actual images in here. We can do lots of things, like creating yourself, or someone
46
you have permission from, in various locations and clothing styles, or you can put images together.
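Before we move on to images: the text-only editing we just did is really a session, an ordered list of instructions where each one applies to the latest image. Here is a minimal sketch of that idea in Python; the class and method names are my own illustration, not part of any Google API.

```python
class EditSession:
    """Accumulates the instructions sent to a text-prompted image editor."""

    def __init__(self, initial_prompt):
        self.turns = [initial_prompt]

    def edit(self, instruction):
        # In the real tool each instruction applies to the *latest* image,
        # so the order of the turns matters.
        self.turns.append(instruction)
        return self

    def history(self):
        return list(self.turns)


session = EditSession("a man aged 20 at the edge of a waterfall taking a photo")
session.edit("remove the drone from this image") \
       .edit("change the time of day to sunset") \
       .edit("make the man sit down on the rock and make the iPhone smaller")

print(session.history())
```

The point is only that each prompt is cumulative; you never restate the whole scene, you just describe the next change.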
47
So I’m going to show you all of that. If I just grab an image of me and drop that in here. If you
48
have more than one image, then reference these as ‘image one’, ‘image two’, and so on. So I’m going to
49
say ‘put me at a desk in a modern New York office, big windows and a view behind me’. Now I think what
50
you’re going to find here is that it quite literally takes the image itself, but that’s just
51
part of it: it will take the image and put it in there as-is, and then we can tweak that to make it
52
slightly more realistic. So here’s the image. That is exactly me at a desk; it’s still got my microphone
53
in here. But you can see that the lighting is not great. It does this at first, right? So let’s go
54
for something like ‘remove the microphone, move back slightly to see me at my desk, change the
55
time of day to midday and sunny’. Let’s hit enter. Okay, so now I have me here at my desk, I’ve
56
removed the microphone, it’s midday, the sunlight is completely different in a modern office. But
57
let me also just go ‘change the color of me to match the scene better’. Okay, and now you can see:
58
before, I was very pale and didn’t really match, and here the color is matching better. So we can start
59
doing things like that to make it look even more realistic. I can also do things if I wanted to
60
grab another image. For example, if I put in right here the logo for AI Video School, I can
61
say ‘put this logo on my sweater’. Okay, so here’s the image right here. You see, I’ve got the logo
62
right there AI vs on my sweater, which is cool. Now let’s also add some things like if we had
63
references for clothing and things like that. So if I take this image of me again, drop that in there.
64
And then an image of this, which is a jacket that I want to wear. And I can quickly say ‘make the person
65
in image one wear the jacket of the person in image two; make it a full body shot’. Okay, and here
66
is me. That’s definitely me right there, wearing the jacket that was in the image, and then it made up what I
67
didn’t give it any reference for: the trainers I’m wearing here and the pants. Perfect.
68
Really, really good. Alright, so we can put me in a location right here. If I just go to copy image,
69
and I go to this and go paste, it will just paste in the image right there. So now I can say put me
70
in Piccadilly Circus in London. Okay, let’s go with that. And here is me full body right there. And
71
again, you could prompt for the lighting and maybe make it better if you wanted to, but this is
72
pretty well lit; it looks like I’ve got studio lights on me or something. But here is me stood inside
73
Piccadilly Circus. I can say stuff like ‘turn me 30 degrees and have me doing the peace sign with my
74
fingers’. Okay, and there’s me: yep, definitely turned 30 degrees there, and I’m doing a peace sign. Very
75
responsive, really, really nice in the shadows, everything really good. So if I go back to this
76
image right here, perhaps I wanted to put some of us together. So I can go once again, copy image, I
77
can put that in here. And then I can also upload some other people if I wanted to, I can put in this
78
guy right there. And this woman right here. And I can just say ‘put us together in Piccadilly Circus,
79
London’, without giving it much prompting for full body or anything like that. Let’s just
80
see what it does. Okay, and here’s all of us right here. The three people: there’s me, the guy and the
81
woman, and it made us full body shots and made up the rest of what they’re wearing, the skirt, the dress. All it
82
had from this image were the straps, so it’s made the rest up here. We’re all
83
smiling and posing inside Piccadilly Circus. Really good. So you can put people together, put clothing
84
on, change the pose of everyone. I could then alter this however I want: add a plane flying
85
over, add other people. I could prompt for people also, if I didn’t want to use specific images.
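One habit worth keeping from the demos above is the ‘image one’ / ‘image two’ referencing. Here’s a tiny, hypothetical helper that prefixes a prompt with its reference labels; the bracketed prefix is my own convention, not something Nano Banana requires, since the tool only needs the numbered references inside the instruction itself.

```python
def labeled_prompt(instruction, n_images):
    """Prefix an instruction with a reminder of how the references are numbered."""
    labels = ", ".join(f"image {i}" for i in range(1, n_images + 1))
    return f"[references: {labels}] {instruction}"


prompt = labeled_prompt(
    "make the person in image 1 wear the jacket of the person in image 2; "
    "make it a full body shot",
    n_images=2,
)
print(prompt)
```

Keeping the numbering consistent across a long session is what stops the model from mixing up which upload is the person and which is the clothing.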
86
Another reason people might want to do this is to put in an image of multiple objects;
87
perhaps you’re creating a scene, or you’re doing a product video. So I have this image I’ve just
88
made right here. Let me make that the right size for you to see. Here’s an image of items that I’ve got.
89
So here is a MacBook Pro laptop, a Rode microphone, a bottle of Pocari Sweat. You might not be
90
familiar with this unless you’re in Asia, I think; it’s big in Japan. Here’s a cactus that goes
91
on the desk. Here’s some Lay’s chips. Here’s a LaCie drive and a wallet. So what I can do is I can put
92
that image right in there. And I can say create an image of these items on a desk in the middle
93
of the desert, which sounds kind of weird, but I really want to push this to the limit. Let’s see.
94
Let’s see what happens. OK, so here it is. This didn’t work, did it? Look, it’s floating and placing
95
these like this. So let’s reprompt; I’m going to show you what I do: ‘create an image that has
96
all of these items organized on top of a desk’. Let’s leave the desert out for now. OK, I’ve used
97
the British spelling of ‘organised’ there. So here’s the difference: when I put in ‘create an image
98
of these items’, I didn’t tell it a layout, and it literally tried to put these onto a desk, with shadows, in the
99
same order. This time I’m going to say create an image that has all of these items organized
100
on top of a desk, because this is prompt-based design. It means that your language,
101
whether you’re saying on, in, around, organized, on top of, including, et cetera, might make a little
102
bit of difference. You don’t need to go too in depth about how to prompt and things like that.
103
It’s pretty intuitive and you could try it again and again, but you need to be really careful of
104
the language you’re using when it is prompt based image creation and editing. OK, so I’ve just come
105
back, because it also kept the layout exactly the same. I added this extra sentence: ‘change the
106
items’ layout to be natural for the desk’. Finally, we’ve got something really good here. See, this is
107
angled right there. There’s the microphone. There’s the bottle behind the chips that are in
108
the foreground, and it’s made them more realistic, whereas the original image was slightly animated for
109
this. There’s the LaCie drive. I think it’s slightly big, although it’s in the foreground,
110
but you can make that smaller with prompting. Now I’ve got this. I can say change the location
111
of this desk to the middle of the desert and let’s run that. OK, now we’ve got the image here in the
112
desert, so I can now bring this back. I can say move the camera back slightly, see the whole desk,
113
see the edges of the desk and things like that. So that was Nano Banana. You can see now in our
114
whole journey how we get from prompting for an image to changing things, whether style,
115
the gender of the person who’s in here, as well as things like putting logos on me,
116
putting clothing on me in a certain location, changing the angle, adding people, and then
117
how to accurately and realistically get objects placed together if you want those inside an
118
image. A really great tool. Go and play with this and then check out the next lectures
119
talking about some other image tools.
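One practical takeaway on the wording sensitivity covered above (‘on’ versus ‘organized on top of’ and so on): it can help to generate the phrasing variants once rather than retyping them each attempt. Here is a small sketch, with a variant list of my own choosing:

```python
# Candidate placement phrasings to try one by one; this list is my own,
# not an official set of magic words.
PLACEMENTS = ["on", "on top of", "organized on top of", "arranged naturally on"]


def phrasing_variants(items, surface="a desk"):
    """Build one prompt per placement phrasing for quick A/B testing."""
    return [f"create an image that has {items} {p} {surface}" for p in PLACEMENTS]


for v in phrasing_variants("all of these items"):
    print(v)
```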
— Google WHISK: Effortless AI Image Creation with Whisk (No Prompting Needed) —
1
Now, the next tool I want to show
2
you is Whisk by Google, and it’s part
3
of the Google Labs.
4
You can see here labs.google, which is
5
a space.
6
If you don’t know what Google Labs is,
7
you’re going to see me using Flow in
8
the next section when we do video generation.
9
That’s kind of the space, the program that
10
they’ve created to use Veo, which is the
11
model that they use to make video, and
12
Whisk uses Imagen, which is for images.
13
So basically, think of Imagen as the model
14
that Google uses, the name for creating images,
15
and then you can do that inside Gemini,
16
you can do it inside here in Whisk,
17
and various other tools; and then Veo is
18
for making videos, and you can do that
19
inside Flow, you can do that inside Gemini,
20
and you can do it on lots of
21
third-party tools.
22
So labs is like a place where there’s
23
different tools, and Whisk is one of them
24
for image generation, which is really, really good
25
and interesting.
26
It’s a very different way to generate images
27
that I think is going to help a
28
lot of people because it’s supposed to eradicate,
29
and you’ll see here on the tools, the
30
Whisk FAQ, the help center, it’s going to
31
eradicate the need to know how to prompt,
32
and it’s really good at doing that.
33
Basically, there’s an image at the top here.
34
I could add in three things, for example,
35
this subject, this scene, and this style, say
36
it’s anime, this man in this scene, and
37
it can generate a person, that person, in
38
that style, in that scene.
39
So you don’t have to say, I want
40
this man who looks like this in a
41
certain style, like anime, bright, sunny day, and
42
describe the background.
43
You can drop in ingredients, if you like,
45
which is actually a term they use
46
too, as you’ll see in the next section
47
in Veo for creating video.
47
You can create your image from other images,
48
so you don’t have to know, we’ve obviously
49
studied all different styles and things, you don’t
50
have to know that, you can drop in
51
your images.
52
It also actually generates the text prompt
53
for that.
54
If you do want to prompt for it
55
inside something like creating a video somewhere else,
56
or want the text prompt, it also does
57
that.
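Conceptually, Whisk captions each ingredient image and then merges those captions into the single generation prompt you can copy out. Here is a rough sketch of that merge step; the template wording is my guess, not Whisk’s actual one:

```python
def merge_ingredients(subject, scene, style):
    """Merge three ingredient captions into one generation prompt.

    The template below is an assumption for illustration; Whisk's real
    prompt is longer and is shown to you in the UI for copying.
    """
    return f"{subject} in {scene}, in the style of {style}"


prompt = merge_ingredients(
    subject="a woman witch",
    scene="a scary castle",
    style="paper cutout animation",
)
print(prompt)
```

That copied-out text is what makes Whisk useful beyond itself: the same prompt can be pasted into Gemini, Veo, or another generator.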
58
So there’s a really good FAQ page
59
in the help center of Whisk, just
60
explaining all of this, and you can go there, but
61
I will show you that.
62
So if you come over, just search Whisk
63
Google Labs, and you’ll come up, it’ll be
64
the first tool that’s on here.
65
Let’s go to enter tool, and it will
66
look something like this.
67
Obviously this page may change slightly over time,
68
but I don’t think this one’s going to
69
change too much at all.
70
Just to let you know, here’s add images,
71
and this also brings it out on the
72
side here, I can just click here.
73
Here’s where I’m going to add, remember we
74
set out right here on this page, subject
75
scene style, subject scene style, that’s where you’re
76
going to add those, I can hide them
77
or bring them out with these arrows, just
78
like this.
79
Over here is where you can put in
80
things like your videos you’ve created, likes, and
81
also your library.
82
The library, if you open it, will open
83
up everything across all the different tools.
84
So if I’ve created stuff in WISC before,
85
and I was trying to generate things like
86
this, or if I was inside Flow, where
87
I was making videos, it’s all part of
88
the Google Labs suite.
89
There’s also the
90
Discord, if you want to go on there,
91
and all about your credits and everything if
92
you want to do video generations.
93
So let’s begin using these.
94
So we’re going to create together, let’s
95
do lots of different things.
96
I’ve actually got this tab open here, because
97
I’m going to be dragging these from here
98
onto my desktop, as if you would, so
99
we can work out some other things.
100
There’s also something down here, which is quite
101
interesting and fun to play with.
102
Not only can I pick what
103
size or aspect ratio I want, depending if
104
I’m creating for YouTube, Instagram, TikTok, or perhaps
105
Facebook or something, but also right here is
106
roll the dice.
107
Renaissance Vampire King, Flower Studded Hat, Flared Nostrils,
108
Pink Hue, Soft Gaze, Portrait, Candid, Quarter Turn.
109
And then if I go into here, I
110
can actually see which model I’m using, best
111
quality, quality, and the seed.
112
You don’t need to know too much about
113
that for this.
114
And I can just hit go.
115
OK, and here is the image right here.
116
You can see it’s got everything, Flared Nostrils,
117
although it’s actually given Studded Hat, Flared Nostrils.
118
I’ve got something.
119
If I click on this, that is a
120
bit strange there, isn’t it?
121
What’s happening here?
122
OK, but it’s done it on both of
123
them.
124
Not what I expected.
125
It’s OK, though.
126
And you can just go roll this.
127
OK, a micro photo of a colorful tiny
128
gnome riding a snail.
129
Yes, I want to see that one.
130
Let’s click and generate that.
131
OK, nice.
132
This is really nice.
133
A gnome riding a snail.
134
I like that a lot.
135
Now you see when I click on the
136
images right here, it’s got, you can edit
137
the details right there.
138
I can actually edit this if I want
139
to via text.
140
I could say a microphoto of a gnome
141
riding a snail, the thick green forest, magical
142
fantasy, the gnome.
143
The gnome has a green color hat, for
144
example, and generate.
145
All right.
146
And you can see it now.
147
The gnome has a green color hat right
148
there.
149
Perfect.
150
Really nice.
151
So that’s how you can edit those things.
152
But the main way you’re going to
153
want to use this is by
154
using these tools here for
155
subject, scene and style.
156
I’m making something.
157
Let’s say I’m making a cutout animation style
158
video so I could grab this image, for
159
example.
160
I really like that.
161
But I want a woman who is a
162
witch, for example.
163
Let’s have a look right here.
164
OK, a woman witch.
165
This one’s pretty good, I think.
166
OK, yeah, like that.
167
And the scene is a scary castle.
168
Nice.
169
Let’s go with that.
170
OK, now let’s put this in here.
171
Now, this is going to be interesting because
172
both our subject and our scene are very
173
different from our styles.
174
So I’m just going to drop those all
175
in there.
176
You can see them just on my desktop.
177
Just drag them in.
178
So I’ve got a witch in the scene
179
of a scary castle in the style of
180
this cutout animation.
181
Let’s just remove all of this right here.
182
I’m not going to give it any text prompt
183
with it and let’s hit go.
184
Right.
185
Nice.
186
So it’s giving me my base.
187
Really, this is kind of realistic.
188
I’d have to keep playing with this.
189
But this one, I think a little bit
190
more animation.
191
Yeah.
192
So now it’s giving me exactly this scene
193
and this witch, hasn’t it?
194
And it’s trying to do this style, but
195
it can’t tell that it’s paper cutout animation.
196
Let me just tell it paper cutout animation
197
style and let’s hit run.
198
And here we are.
199
We’re getting closer, getting nicer.
200
OK, a paper cutout animation style.
201
And then here, for example, is what I
202
want to show you.
203
If I ever wanted to recreate this, say
204
I want to use it on a different
205
platform.
206
Maybe I was using it on a different
207
image generation or inside Gemini or whatever I
208
wanted to do.
209
I can actually copy this now.
210
It’s my entire prompt there.
211
And I could paste that in.
212
It creates the prompt that Google is using
213
for this scene.
214
Also the same if I want to use
215
it in Veo and create video from text
216
prompt.
217
So it’s a really good tool.
218
You’ll actually see this when I teach you
219
how to use Veo 3 in the next
220
section.
221
You’ll see me use Whisk quite a lot
222
because I don’t want to bother to type
223
prompts in. You’ll see, like when
224
there’s a singing yeti doing a song or
225
cutting through glass with a knife for ASMR
226
videos.
227
I just asked Whisk to make a prompt
228
for me and it was able to do
229
that.
230
That’s pretty much what you’re going to be
231
using this for.
232
But I would keep playing with this.
233
For example, if I copy this right here
234
and then I get rid of all of
235
these.
236
Let’s go about this and let’s just test
237
everything out.
238
And then let me just paste that back
239
in here.
240
It’s in a paper cutout animation.
241
So you can see that I’ve just copied
242
that exactly.
243
Let’s run that and see what Whisk comes
244
up with.
245
If I put in their own prompt that
246
they’ve created from this image back into itself
247
without anything for subject, scene or style.
248
Yeah, here we are.
249
I’ve got this right here and got this
250
right here.
251
And you keep playing and playing and playing
252
with this.
253
But using that, you can see how that’d
254
be really helpful to be able to get
255
exactly what you want from there.
256
So let’s keep playing with some things.
257
Let me come up with a few more.
258
So let’s say I like anime.
259
Let’s grab an image in the style of
260
anime.
261
Nice.
262
I’ve got this Studio Ghibli anime image right
263
there.
264
Let’s actually use this: here’s an image of me
265
to put on there in anime. And what
266
setting do I want?
267
OK, and here’s a cool setting in anime
268
style of Tokyo.
269
Oh, I really like this one.
270
OK, let’s grab that one down there.
271
So let’s put these all together and see
272
what happens.
273
Let’s put it in my subject.
274
My scene is Tokyo and my style is
275
this anime.
276
OK, let’s not give it any text prompts
277
at all.
278
And let’s run that.
279
OK, and here we go.
280
There’s me and I’m at a desk.
281
It has taken in some of that scene
282
for my subject.
283
Oh, which is true.
284
That’s exactly what I want.
285
So the subject image doesn’t just take the
286
subject person; it also takes anything else, like if they’re
287
sat at a desk or whatever.
288
I’ve also got the ‘Learn AI All in
289
One Place’ text here.
290
But this one didn’t take the desk.
291
There’s me; it’s made me look quite handsome there.
292
That’s really good.
293
Thanks AI for doing that.
294
In the animation style inside this setting of
295
Tokyo.
296
Perfect.
297
That’s exactly it. All right.
298
Let’s play with another one for style.
299
Now let’s get film noir.
300
Let’s do this.
301
It’s like 1920s 30s.
302
This film noir black and white of all
303
these like Venetian blinds.
304
These lights coming in.
305
Let’s once again, let’s get someone else here.
306
So I just typed in woman here.
307
Let’s get something like, oh, this will be
308
interesting.
309
African woman portrait.
310
Let’s put together these two styles that would never
311
usually go together.
312
I’ve got subject, style and scene.
313
I want to actually go back to that
314
film noir style.
315
Let’s go back to here and let’s find
316
a scene.
317
Yeah, let’s still get the scene of like
318
the Venetian blinds.
319
Okay, let’s put these together right there.
320
Let me erase all of these: woman, scene.
321
Let’s leave the style,
322
also, this film noir.
323
Okay, great.
324
Let’s run those and see what it can
325
do.
326
Oh, nice.
327
So I see what’s happening here.
328
This is good.
329
That is done there.
330
You’ve got this person here, which is definitely
331
her.
332
Woman in African dress.
333
In film noir, definitely got the style right
334
there.
335
But because the scene has this man in
336
it, let’s erase that.
337
Let me just put in, oh, we can
338
enter text too if you want.
339
So let’s go a 1920s office black and
340
white blinds.
341
Let’s just leave it at that.
342
Let’s go generate.
343
Okay, yeah, this will do.
344
Let’s do this setting here, 1920s office.
345
I want this person right here in this
346
style and let’s run that.
347
Okay, nice.
348
This is exactly it.
349
Look at this.
350
I’ve got this woman in African dress with
351
the blinds here.
352
Film noir, it’s going across her face.
353
A young adult female depicted high contrast black
354
and white image.
355
Reminiscent of film noir.
356
Exactly that.
357
And that’s how you put things together for
358
using this.
359
So if you don’t have the image that
360
you want, then of course you can enter
361
it or upload the image.
362
And you can do that for all of
363
them right here.
364
If I want to enter in the text
365
to get the style description.
366
But that’s kind of the whole point of
367
this tool, of Whisk, is to be able
368
to drop in anything that you found.
369
Instead, just grab these offline.
370
Anything you found to create your own images
371
and version of that.
372
Whether you’re going to use it for this,
373
or if I wanted to, I could just
374
copy this.
375
And if you’re using something else, even if
376
you’re using something like Midjourney, let’s just paste
377
that exact prompt into Midjourney.
378
Or perhaps it’d be better to do it
379
in Gemini right here.
380
I could just paste that into here.
381
I could say image of and just go
382
with that.
383
Nice.
384
Okay, so Gemini has given me this image.
385
Let’s take a look at this.
386
Nice.
387
Exactly what I wanted.
388
Exactly that.
389
It’s not quite as nice, I think, as
390
the image we got from here.
391
But that’s because we gave a specific reference
392
image with a scene and things, didn’t we?
393
And these are the images I’ve got here
394
from Midjourney.
395
Beautiful.
396
These are really nice.
397
Really nice.
398
They don’t have them going across her face.
399
The lines from the blinds.
400
You could prompt for that if you wanted
401
to.
402
But you can see how you could either
403
use Whisk to create images that you just
404
want to use straight from here.
405
You can also refine these if I wanted
406
to.
407
And I can add additional details: you
408
could add something on her shoulder, change the
409
earrings, do something different with the light coming
410
through, add someone else in shot, whatever it is
411
to refine it if I want to.
412
But let’s remove that right here.
413
There’s also Animate, which is going to use
414
some of Veo, where I can give
415
it some instructions right here. But if I
416
were you, I’d just be using Flow.
417
And I talk about it in the next
418
section of the course rather than do it
419
inside of here.
420
But it is an option.
421
But whether you want to generate images in
422
here and then download them right there, or
423
you’re just going to use these to get
424
the prompting to then copy and put it
425
in elsewhere.
426
It’s an amazing tool that I can’t recommend
427
enough.
428
And I really wanted to show you in
429
the image generation section of this course.
430
OK, I’ll see you on the next video
431
shortly.
— DALL·E Overview: Creativity Unleashed —
1
We’re going to look at DALL·E, which is part of OpenAI, which you’ll probably recognize
2
from ChatGPT, and that’s exactly where we’re going to go into to use this.
3
This is the first of several tools here.
4
We’ve got DALL·E, Gemini, Stable Diffusion, as well as DreamStudio, Adobe Firefly.
5
I’m also going to show you a bit of Photoshop with that, and then we’ve got Runway, which
6
we use for video a lot in the next section, Meta, Grok.
7
There’s lots to be shown you, and we’ll go through these overviews of these platforms
8
and any more that I think need to be added over the coming months and years.
9
These are definitely image generation tools of note, ones you’ll probably want to use
10
and I think will be around for a long time, some free, some paid.
11
Let’s start with DALL·E, shall we?
12
We’ve already looked at Midjourney and did an overview.
13
To get there, now you can go to OpenAI.com.
14
It’s surprising sometimes how difficult it is to find the actual AI tool for these because
15
some of them, like Stable Diffusion, are used by lots of different platforms, so finding
16
the actual site is sometimes difficult, but on the page, AI Image Generation, the page
17
we’re going to be using for this section, if you scroll down underneath all of these
18
at the bottom, there is a working link to every single one just to make it easier for you.
19
Some of these lectures are going to be shorter, some longer, some there’s not a lot to show
20
you, some it goes quite in-depth.
21
So DALL·E: let’s click Open in ChatGPT, and it opens here.
22
I think the best way to show you this is to give you a quick overview. It looks
23
exactly like ChatGPT, except you can upload an image here, which you probably wouldn’t
24
be using unless you were asking ChatGPT, what is this?
25
But this is what it looks like, and it’s very, very easy to use.
26
Unlike Midjourney, which we’ve just seen, this is very much chat-based, back
27
and forth, as opposed to instruction-based Midjourney.
28
So let me go onto our page, and let’s just grab, actually, while I’m in DALL·E, let’s go
29
and grab one of our prompts.
30
I already did a Vintage Robot.
31
Let’s do a futuristic skyline at sunset, high detail, neon colors, cinematic perspective.
32
Okay, that sounds really nice.
33
Let’s do that, and let’s go in.
34
Let’s just paste that in, and I think the best way to show you is to do it, and then
35
we’ll go through tools and things that you can do.
36
You can see what gets generated inside here is one image, unlike the
37
grid of four inside Midjourney; you’ll see some other tools do that too.
38
Now I can click on this image.
39
Before that, obviously, I can copy it, I can rate it to help them out.
40
I can read aloud.
41
Here is the cinematic depiction of a futuristic city skyline at sunset with high detail and
42
vibrant neon colors.
43
Let me know if you’d like any changes or enhancements. Okay. Thanks, DALL·E.
44
I really appreciate it.
45
It’s a really nice looking image.
46
Let me make that bigger for you.
47
Really good looking.
48
Look at these colors. Wow.
49
That is really nice coming through.
50
For this prompt also, this is really good.
51
Look at this: it’s gone ahead and, rather than have trees because we’re
52
in a city, it’s actually generated like holographic trees as if that’s what people have in the future.
53
You can see that coming in with the sunset and then lying down here.
54
This is a really nice, really nice image.
55
Now there’s some things we can do here if I just close that while I’m in here.
56
If I want to, I could download with this button up here, or I could reprompt again if
57
I need to, by just doing the same prompt over.
58
Now when you come into here is where you’re going to get some of the more advanced tools.
59
Although now we’ve looked at Midjourney, none of this will look crazy advanced.
60
If I click on this, or hover over it, it’s just called the select tool.
61
I can take this and I could do things like, let’s do this, let’s get rid of that.
62
So let me just put in remove and let’s run that.
63
Now when there’s updates right here, over here on the chat, you can see if I click this,
64
that’s removed exactly what I wanted to do.
65
Let’s try something else.
66
If I put in right here and I say, I want now a giant bird.
67
That’s all I’m giving it, not a lot of information, no color, anything like that.
68
Let’s hit run and let me just see if it can add something as opposed to remove and how
69
accurate that is.
70
Okay, let’s have a little look at that.
71
It looks like it’s done nothing right here.
72
Absolutely nothing.
73
You’ll find this with all AI tools if you’re doing inpainting; Midjourney, my favorite, is no exception.
74
You may have to do multiple iterations or try some different wording.
75
I’m going to try that again.
76
Let’s do this: right here I’m just going to put ‘a bird flying in the sky’.
77
Let me make sure I’m spelling that correctly. Okay.
78
Oh, I still typed ‘breed’ instead of ‘bird’, a typo, but the AI should understand what I mean.
79
I’ve now done several more attempts at trying to inpaint and add that, and it didn’t work.
80
What ChatGPT is good at is removing things, and it does that flawlessly, effortlessly. If
81
I go back to the first one right away, the sky looks better.
82
Inpainting to add something, though, is not great using the select tool.
83
But let’s see. I’ll take this image here and I’m going to ask it to change
84
completely now or somewhat.
85
Let’s see if it’s any good at that.
86
So I’m going to say, change this image to night time, black sky. Okay.
87
Much better, as opposed to inpainting.
88
Let me click that. Apart from what it’s added,
89
it looks like it’s got some kind of computer screen and grid effect, I don’t know what this is,
90
and it didn’t change it completely.
91
It’s like it misread some of this and added some more buildings, but you could do it again
92
and again and again.
93
So it did understand.
94
And that’s the strength of a conversation-based AI model like ChatGPT from OpenAI, and the
95
great thing is that you can use it with images.
96
So if you’re more prone to wanting a chat-based, back-and-forth model, then this is
97
definitely that, as opposed to instruction-based models like we’ve been playing with in Midjourney.
98
Now, the other thing, of course: as opposed to Midjourney or lots of the other
99
models I’m going to be showing you, you don’t select the orientation here, you tell it.
100
So when I ask for, let’s do another one while I’m on here, shall we?
101
Let’s do a surreal underwater scene, mystical lighting.
102
Let’s take that one.
103
Let me paste that in.
104
And then I’m going to say ‘aspect ratio 1:1’.
105
So I want a square image, as opposed to this one, which is 16:9.
106
It definitely understood. As opposed to selecting a setting here to tell it, I can
107
just tell it in conversation: put this in a 1:1 aspect ratio.
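For reference, if you ever move from the chat to OpenAI’s API, the aspect ratio becomes a `size` parameter rather than a sentence. As far as I know DALL·E 3 accepts three sizes; treat this mapping as an assumption to check against OpenAI’s current docs:

```python
# Assumed DALL·E 3 API sizes mapped from the ratios used in this lecture.
DALLE3_SIZES = {
    "1:1": "1024x1024",    # square
    "16:9": "1792x1024",   # landscape (closest supported ratio)
    "9:16": "1024x1792",   # portrait (closest supported ratio)
}


def size_for(aspect_ratio):
    """Translate a ratio like '1:1' into a size string the API accepts."""
    if aspect_ratio not in DALLE3_SIZES:
        raise ValueError(f"unsupported ratio {aspect_ratio!r}")
    return DALLE3_SIZES[aspect_ratio]


print(size_for("1:1"))
```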
108
Here’s a surreal underwater scene featuring floating castles, giant dolphins with mystical
109
lighting, dreamlike atmosphere.
110
It definitely did that.
111
very good at understanding what you mean.
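Since you describe the aspect ratio in words rather than picking a setting, it can be handy to know which fixed sizes the model can actually return. Here's a small sketch that maps a requested ratio to the nearest size OpenAI's image API supports for DALL·E 3 at the time of writing; the function name is my own, not part of any SDK:

```python
# Map a requested aspect ratio to the closest size supported by
# OpenAI's image API for DALL-E 3 (square, landscape, portrait).
SUPPORTED_SIZES = ["1024x1024", "1792x1024", "1024x1792"]

def closest_size(width_ratio: float, height_ratio: float) -> str:
    """Pick the supported size whose aspect ratio is nearest the request."""
    target = width_ratio / height_ratio

    def ratio(size: str) -> float:
        w, h = map(int, size.split("x"))
        return w / h

    return min(SUPPORTED_SIZES, key=lambda s: abs(ratio(s) - target))

print(closest_size(1, 1))    # square request
print(closest_size(16, 9))   # landscape request
print(closest_size(9, 16))   # portrait request
```

So when you ask conversationally for "a 1:1 aspect ratio", the model is effectively snapping your request to one of a few supported sizes like these.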
Most of the time it's very good; where it has faults, it's the inpainting tool and being able to change things selectively. But now let me give it instructions directly. Instead of trying to inpaint and select, let's just say "add birds to the sky" and see what it does. OK, so it did it, though it also changed the image quite a lot.
What OpenAI is great for is when you don't need fine-grained continuity between shots. If I'm just trying to generate an image for advertising or something, then great. But if I'm looking to turn this into video and make a movie with continuity between shots and characters and things like that, it's much less collaborative when it comes to storing a person or an image and generating consistent images to eventually make video, like you're going to see we do with Midjourney and some other tools. But it is still a great tool; it really makes some nice images. It just depends on the purpose you want it for. It's super easy to use if you're already on ChatGPT; and if you haven't used ChatGPT, just say "give me an image" and it's really good.
Now, I need to show you all these tools: the good, the bad, the ugly, because I don't know who everyone watching this course is. Some people are going to want to make movies, the same as I am. Some people will just want to make really short 15-second, or even 5-second, ads where you just need a single image, or several images, and you don't need continuity behind them. ChatGPT can definitely, definitely do that. And by the way, if I call DALL·E "ChatGPT" because it's part of OpenAI, you know exactly what I mean; a lot of people do that.
So let's head back over to the site, and I'll go through some of the noteworthy points on here. Aspect ratio: you don't directly select the ratio with a setting like AR, as you might inside some other platforms; you just tell it you want it in 16:9, like you saw me do. With DALL·E you can also tell it you want high detail, you want it realistic, or in whatever style you want. You can also use lighting and composition terms, and you can ask for close-ups and things like that. The inpainting tool: we've already played with that, and it does have some problems and issues, but it is very good at removing things and leaving a realistic background. With some other tools you'll see that you try to remove something and it doesn't look so clean; this is good for that. Styles, like I mentioned: if I want oil painting, claymation, or watercolour, that's perfect. Negative prompts also work. I could say "no birds in the shot", for example, and it would make sure the image didn't have any; that's something Midjourney can sometimes struggle with unless you use the --no parameter in the prompt. And it understands moods very well: you saw in this image how the mood is definitely mystical. It's very good at understanding that.
Let me just do one more generation. Let's go. You guys didn't see this earlier, but I was playing with prompts I was creating on the site for you. So: a vintage robot playing chess in a cozy library, warm lighting. I'll do that, and then I'm going to do the exact same one in the style of a watercolour painting, and let's compare these side by side. Here is the warm-lit, rather futuristic robot playing chess; I don't know if it's quite what I had in mind, so I would run this again and again. The chess pieces look pretty good, although some look almost like knights where the king and queen should be, and there are three of them here, plus another four pieces right there and one more, so you might want to start playing with this and regenerating it. And then the watercolour painting style: it's given me that edge right here, and perhaps we can get some watercolour wash here, and yeah, it's definitely illustrative for sure. Let's just zoom in: it's definitely got an illustration, storybook-type feel. So yes, I guess we'd call it watercolour, but it's not the washy watercolour you were perhaps imagining, which you could just tell it to iterate on again and again and again.
So where DALL·E is good is that it's very responsive to the styles you ask for. Actually, I'm going to play with one more. I'm absolutely obsessed with AI image and video generation; I could be here all day. In fact, I've lost many days just playing with it, just to see what else I can get it to generate. So: in the style of anime. OK, let's have a look at this. It's definitely got more of a comic-book feel, perhaps more illustrative than anime, but actually, yeah, it's done a very good job. The more I look at it, at all the details: yeah, good.
OK, so I needed to show you that, and I'm showing you all of these tools. It's not my favourite image generation tool, and it's not going to tear me away from Midjourney just yet. I think that if you keep watching, you'll probably come to the same conclusion if your purpose is to make a video and have full control of scenes and shots based on producing images first. But you may have a completely different purpose. So that was DALL·E. Next, let's have a look at Gemini.
— Kling: Access to Kling and Subscription Plans —
So this is Kling. Earlier in this course I had a lecture about Kling; it wasn't that big a tool at the time, and it wasn't really leading the marketplace, but over the last year it has really gone up, and the quality is amazing. Because you can do both image and video inside Kling, I'm going to have some of the image lectures here, the next several lectures, on Kling image; then, if you want to do video, you can go over to section 10 and see how to make video inside Kling. So let's start right here with image, and later we'll get to video. Kling is a great tool for AI image and video, and I'm going to show you exactly how it works over the next lectures.
Let's start with access and plans, so you know how to get to Kling, how much it costs, and what you can expect to use; there are also often lots of offers, free trials, and sign-up deals. Let me show you in a blank tab: if I just search "Kling" in any browser, you'll see it comes up as app.klingai.com. Go to that and you'll arrive at a page that looks something like this. Obviously the top here has things like competitions they're running and updates; these will change, but generally speaking the layout will be something like this when you arrive, and you land in the Explore tab right here. So this is the layout. The Explore tab is great because it breaks things down by what it thinks I want to see: shorts, Kling 2.5, creatives; you can scroll through and see what others are making. So I could click on, say, this one, which looks really good. Let's click on that, and I can see the prompt right here, the detailed prompt. I can recreate this myself by clicking here, I can see what version they were using, and if it's published I can see comments about it. It's really great because, much like Sora, you're able to post your work and people can follow what you're doing. It's not quite the social-media network I think Sora is turning into, but it is a place for you to really go and explore. Kling has a really good community for AI, and I think that's one of the reasons it's grown so much over the last year or so. So, back to the layout of where everything is.
You'll arrive on the Explore page, and you can go through, check things out, and get inspired by any of these. I can also search right here, and I can see events and follows, but what you really want to know is down here: Image, Video, Avatar, and Effects. These are what they sound like: creating images with AI, which we're going to do now; AI video; Avatar, someone speaking (you, or someone you've created); and Effects. Image and Video are also down here on the side, and you can hit "see more", or click "all tools" to get a dashboard of every tool available. In a moment we'll go through Image and create images (you can see some other things I've been doing right here), and we'll go into Video after that. Within those we'll do text to image, image reference, and restyle, and also text to video, image to video, and multi-elements; lots of great stuff coming up. But first you'll want to know how much all this costs. If I come over here I can see the number of credits I've got, and I can show you some of the plans; once again, these will obviously change over time. So let me go back to my plans: Standard, yearly or monthly. I always like to do monthly, because you can start it and then cancel it. For example, I think right now I'm on the Standard plan; it says so right here: Standard plan, $8.80 a month right now. Again, these will change depending on where you are in the world and whether they have offers or promotions running, and you can see what you get. There's a trial, so you can try it for free, and the differences between the tiers are priority access to new features and the number of credits you get. I get 660 credits a month; the next tier jumps up to, what is it, triple the price, I guess, but way more than triple the credits, and it goes up from there. It'll depend on how much you want to use it, but you can always start on the Standard monthly plan; I just trial it and then cancel straight away so it doesn't bill me next month. You can always add credits on, or upgrade to a different plan if you find you love the tool, you're getting on with it, and you want more access. And if I'm anywhere and running out of credits, I can also just purchase credit packs: I can add on 660 credits for 10 bucks, up to 48,000 for 600 bucks, which is a lot of credits. So it depends on how much you're going to use it and what it costs.
Let's take video as an example. Right now, if I do text to video and want, say, "a man walking", five seconds is 25 credits and ten seconds is 50 credits. So if I wanted ten seconds and made ten of those, that'd be 500 credits, and on that basic plan I got 660 credits, so that's not that much. If you use five-second clips instead, you'll obviously get double that. It might be that you want a bigger plan; it's all up to you, and it depends on how much you'll use it and how many shots you need. If you're making a full movie with AI, you're going to need a lot more; if you're just making shorts, then each clip might be a short unto itself, at 25 or 50 credits, depending on what you're using. So those are the plans. Go and check them out for what you need and work it out: if one five-second video is 25 credits, how many can I get for 3,000 credits, buying by the year or by the month? I could pay for this month, cancel it straight away so I'm not billed next month, trial it, and work out whether it's worth it for you
and your projects. This is an all-in-one tool that has everything: image, video, sound, audio, avatars, whatever else you want to do. It's a really fantastic tool, and I'm going to show it to you. So now you know how to access it, and now you know Kling's plans. Next, let's properly go through the layout and all the different features; then, in the following three lectures, I'll go into each of these and we'll trial them with several different prompting styles and images, so you can see exactly what Kling has to offer. So let's go through the layout in the next lecture, so you're not lost, and then let's use this tool.
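To make the credit sums I've just been doing aloud concrete, here's a tiny back-of-the-envelope sketch. The numbers (25 credits per five-second clip, 50 per ten-second clip, 660 credits on the standard plan) are the ones shown on screen at the time of recording and will change, so treat them as placeholders:

```python
# Back-of-the-envelope Kling credit maths (prices as shown on screen
# at recording time; check the current pricing page before relying on them).
CREDITS_5S = 25        # credits per 5-second text-to-video clip
CREDITS_10S = 50       # credits per 10-second clip
MONTHLY_CREDITS = 660  # standard-plan monthly allowance at the time

def clips_per_month(credits_per_clip: int,
                    monthly_credits: int = MONTHLY_CREDITS) -> int:
    """How many whole clips a monthly credit allowance covers."""
    return monthly_credits // credits_per_clip

print(clips_per_month(CREDITS_10S))  # 10-second clips per month
print(clips_per_month(CREDITS_5S))   # 5-second clips per month
print(3000 // CREDITS_5S)            # 5-second clips from a 3,000-credit pack
```

That's the whole calculation: divide your allowance by the per-clip cost and see whether the plan covers the number of shots your project needs.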
— Kling: Layout & Interface —
So, the layout in Kling. This is an important lecture, because I don't want anyone to get lost and miss features that are here. I briefly went through some of it in the last lecture, but let's go through it systematically, to make sure you know where everything is. When you land on the Explore page (it looks like this, and most of you will), you've got your toolbars down the edge here and along here for prompting and creation: Image and Video, which is mostly what we're talking about. We'll do Image first.
So let's click into Image right here. You have your prompting bar here, and this is where all your generations get displayed; I can filter to see only images, only videos, only audio, or everything. The prompting bar is always down here, with settings at the bottom, and at the top you have the different ways we can create an image: text to image; image reference, where we put in an image and reference it; or restyle, where we put in an image and restyle it into something else. These are what we're going to work with here. As a general rule, just choose the most advanced model right there. You could go to older versions if you wanted, but I don't see the point; they get better and better, so if you have the option, always stick to the newest. By the time you're watching this there might be a 2.2 or a 3 available; just choose the most recent version. So this is Image. You'll then want to choose the format, of course, down here: do I want 16:9 for YouTube, 1:1 square format for Facebook or Instagram, or am I making a short for TikTok? Then I can choose the number of outputs, which affects credits, of course: four outputs will be four times the credits, and one output a lot less. For example, in text to image, if I prompt "a man" I can see that's going to cost me one credit right there; if I do five, it's going to cost me five credits, as simple as that. And high-res or standard: why would you not choose high-res, considering the credit cost is the same, unless standard is the look you were going for?
That was Image, and here's where everything's displayed; this will be the same regardless of whether you're in video or image. So if I come over to the video generator, once again at the top I've got text to video: I can type a prompt here, set 5 seconds or 10 seconds, pick the format, and choose one output, or three or four outputs at a time, which again will cost more credits. Image to video: I put in an image and turn it into video, and I can choose a start and an end frame; we'll get to that later. Or multi-elements: I can upload the elements of a video and say I want this object, this person, and this scene, which I upload; and again, it's all here on the left-hand side. Do break your history down into videos and audio if you're generating a lot, so you don't get muddied with everything while trying to find what you want over on the side here.
Now, there are other things on the left. There's an AI sound-effects generator: if I wanted, I don't know, a police siren, I could prompt for that and generate it. Let's see how much that is. I've typed "a police siren" (it doesn't really matter what), and four outputs cost four credits, so it's not expensive as far as credits go, and they're 10 seconds long, which is really, really good. There are other things, like the AI virtual try-on generator, which I'm not using and probably isn't something you'll be very interested in for the sake of this course. Avatar we'll get to later: we can upload and create an avatar, or use one that's already here, and have a talking head for an explainer video; or we can see the AI baby videos, which are really popular right here, with some really nice ones on there. That depends on what kind of videos you're creating, of course. And at the bottom, this clock symbol is the AI video extender: if I have a video, I can select it from my history. Say I created this one, a 5-second video of myself; I can click on it and extend it by 5 seconds, one output, 35 credits, if I wanted to.
Now, if I go back to Explore, I can see all the assets; I can come down and click this arrow (if you don't see it, that depends on the size of your screen) to see all the tools. I've got image generation; video generation; Kling Lab, which is pretty much the same as this but a space dedicated to creating things and moving ideas around, just a different format to work in (I never really use it); Effects, so you could change, say, this building from realism into an anime style up here; virtual model, which I showed you; custom model; sound generation, which I showed you; and image editing. We'll get into most of this; some of these aren't needed for the purposes of this course, but I just want you to be aware there are other things on here, like that outfit try-on (I'm not sure how popular that is) and also extend, which we showed you.
A lot of the features also appear after generation: for example, once I've generated a video, I can lip-sync it, add a sound, or add elements to it. If a feature isn't available in the current view, you'll find it back on Explore or Assets under "all tools", where everything is listed individually. But I'm going to go through these step by step; nothing will be left out. When we go into Image, you'll understand exactly what everything is and what I'm using, as I touch every part of the page, so just follow along. That's the layout, so you're not lost. Sometimes when you see me clicking through and creating things in the next few lectures, you'll wonder "where the heck was that?"; well, I'm either here, here, or here, on the left side or the right side. That's the layout of everything. Next, we're going to stick with Image: we'll do text to image, your text prompt, how we like to use AI to get the best prompt for it, and all the different settings, and we'll test some out.
So let's start with that in the next lecture. There are also some exciting things (you can see me playing with some of them here) that you can do after you've already generated your image: keep editing it, turn it into video, vary it, or even inpaint it. All coming up, so I'll see you in the next lecture, where we'll start with text to image.
— Kling: Text to Image Generation with Kling + Editing/In painting/Upscaling —
So, on to image; we'll start with creating one. Remember, if we're here on the Explore page, the main page you come to, I can either click Image down here and it will open this page, where you can prompt for the image and where all your other generations are; or, if you're on this page, you can come over to Image on the left and be presented with the same page. Now, like I said before, let's just keep this on the most recent version, whichever that is at the time you're doing this. And this is where you prompt. You've got text prompt, image reference, and restyle; those are the next two lectures. So first, let's do text to image, which is fairly obvious: we're going to use a text prompt to describe what we want in our image. There's a certain style and way to do this, but there's also a great hack right here. So let's begin with prompting; I'll do the settings after. Now, there are lots of different theories of prompting, and I've gone over loads over time, but that's sort of becoming a thing of the past: with aids like the built-in DeepSeek and AI generally, you can be really conversational in here.
I could just say: "a man wearing a coat walking through a London street". I could even add: "he looks kind of sad". I can say something really conversational like that, and I can actually use DeepSeek right here, an AI platform much like ChatGPT and others, and it will come up with a much better prompt: "A man wearing a long coat walks alone in a rainy London street, head slightly lowered, shoulders hunched against the cold, puddles reflecting dim streetlights." I could use something like that. Now, I do have a formula that I like to think of, where I fill in the blanks, and it has five steps. The first one is overview and style; let's put a full stop after that and I'll explain it. Then subject and costume, full stop. The third is setting and conditions. The fourth is action, usually of the subject, though it could be of an object. And the last one is camera movement. I like to have these five points here. What I mean by overview and style is, for example (let's do this): "a man is walking through Times Square, realistic, realism". Sometimes I double-book myself there and say two words. That's just an overview of what's going to happen. Now, the subject and costume: I've already said it's a man, but I should say a man, age 25; ethnicity, white; what's his hair like, short brown hair; and you can go really in depth here if you want and say blue eyes. What's he wearing? A long black winter puffer coat; go into details: a scarf, a hat, gloves, whatever else you want. That's the costume part right here. So go into as much detail as you like, or, quite often, I like to just leave it up to the AI to come up with something maybe even better than I can think of. For the setting and conditions: by setting I mean I'm going to say "Times Square, New York", and then I could say things like neon signs, although I don't really need to, because it's pretty obvious what Times Square looks like; but you can go into more detail like that. By conditions I mean weather, pretty much, and time of day, so I can say "daytime, winter, light snow". Great. Now, action of the subject: this is an image we're creating here, not a video. If we were doing video, we'd need quite a bit of action in here, but I can just say "man is walking, and we see him from the front". And then camera movement: that's more for video, so I'm going to erase it for an image. When we come to that later, do remember that what I like to do with this last point is say where you're viewing it from; you don't have to describe movement, you could use it for camera angle instead, so low angle, high angle, shooting from above, from below, or from the side if you want a profile. But I've said here: we see him from the front. OK, I can leave it right there, and that's a pretty good prompt. I'm going to copy it to make sure I've got it, and I can hit DeepSeek here; let's do that. I would always do that, because Kling and DeepSeek are intertwined, so it knows exactly the correct style of prompting, trained over and over again, to give the best output you want here. It says, for example: "Front view perspective, 25-year-old white male with cropped brown hair and blue eyes, dressed in a long black winter coat, striding through light snowfall in daytime Times Square, hyper-realistic rendering."
And there are lots of different versions like this really good. And I can just hit
58
generate. I can also right here upload something if I want to upload a reference and deep thinking
59
can then I can upload here and click deep thinking and it will give you exactly the
60
prompt for that kind of image that will come later. We’ll talk about that next when we
61
talk about image reference and things more. But it is an option right here. So I can either
62
now hit generate over here. I can hit generate right here. Five credits we can see. Before
63
that, I want to make sure I’ve got this how I want it. So do I want it 916 portrait for
64
shorts or something? I actually want it 16 nine. So it’s more like a YouTube video like
65
an old TV style. And then do I want how many outputs do I want? Let’s just do four outputs
66
for this is fine. High res. Yes. So I can generate it right here. See, that’s also changed
67
to four credits there. So let’s hit generate. And then if I just close this deep seek panel,
68
you can see that it’s generating right here. And it shouldn’t take too long. Now, actually,
69
while we’re waiting for that, I can just do I want to do exactly the same thing, but in
70
a completely different style. If you’ve got a phone with a realistic style, I’m going
71
to say in the style of anime. So we’ve got an animation a 25 year old male with short
72
brown hair, blue eyes will do Times Square knew when a long winter puffer jacket coat
73
like snowfall in dynamic view from the front style. Now I can if you did, by the way, if
74
you didn’t know what you wanted to generate, you could just screen through here and it’ll
75
give you lots of different, different ideas. And you can hit that. You can also go to styles
76
right here, if I click this, and it will have featured stars like old retro cartoon, photography,
77
random, Ghibli studio, which is pretty much what I want right now, an anime style anime
78
right here. So I can actually just click that right there. And it’s got Ghibli right there.
79
Studio Ghibli is by the way, if you don’t know, spirited away those great anime films
80
in the anime style, but a little bit more romanticized in some of his imagery and color
81
and things like that. And there are loads on here. So if you don’t know what the style
82
is, you could go through it and see, oh, I actually want it in a squishy toy style, or
83
I want it in a colorful dream style or pixel art or something. Or this one’s quite nice
84
healing Japanese anime, that’s quite nice. So I can run this, let’s do again the same
85
format and everything for versions. And let’s generate that. And I can show you the other
86
ones that’s already generated right here. So these are finished. These are the generations.
87
Why are they correct? Let’s take a closer look at some of these. So I’ve got a male
88
25 years old, dark hairs, he got blue eyes, perhaps I can’t really see in this. If I scroll
89
through some of the others, perhaps he got blue eyes, perhaps he hasn’t, I could prompt
90
for that again, definitely in Times Square. He’s walking in the road. What did it say
91
right here walking through Times Square is walking in the middle of the street here,
92
as opposed to on the sidewalk. So when you’re doing your settings, Times Square, you can
93
also say sidewalk, sometimes anime does this, like he’s walking straight down the middle
94
of the road. But it looks really good. It’s definitely realistic, really, really nice.
95
Now I’ve got options here, I can either generate a video from that, which we’re going to do
96
in few lectures time, when I go to generate there, I can in paint this, I can vary it.
97
So let’s do that I can click vary two credits. And it’s going to make some other variations
98
of this, it’ll come up top in a moment, I can set as a reference. So I can use this
99
as a reference for images, get to that in a bit, I can remove it altogether upscale
100
it to make it really good quality when you want to download it in paint. So if I click
101
in paint here, let this pop up. And I can either move this around if I needed to move
102
it on my in paint, I can box selection. So if I wanted to do this, and then I could say,
103
let’s, for example, let’s remove car, or I could do brush selection in exactly the
104
same way. And I could brush out this right there, remove the car, let’s hit that over
105
there. Great. Or eraser, if I wanted to just erase people, let’s erase those two there.
106
Let me just make sure I brush selected all of this also use the eraser to remove work
107
you’ve done there. In paint, I’m going to get four outputs. Let’s go in paint. Great.
108
And in the same way, I can also edit this. And you can go straight back to your prompt
109
and you can work on that. Let’s have a look at some of those other things that we are
110
waiting for. So this is the anime style. I really like this exactly the same prompt in
111
an anime, very realistic, really nice. Let’s scroll through some of these. I don’t mean
112
realistic in the realistic sense. I mean, just like an anime less so that one, I think.
113
But definitely look at these top ones. That’s really nice. That’s really good. And he is
114
actually walking on a sidewalk. It looks like he cleared through Times Square as opposed
115
to down the middle of a road. That’s really, really nice. Have a look at some of these
116
edits right here. So this was my very member. I click the very on here. So it’s a slight
117
variation you’ll see on the image that we selected. If I show you for reference right
118
here, leave it was this image, this image right here, got a variant of it looks like
119
this. So we pretty much have the same setting, same guy and everything, but slight variation
120
on the way it looks. If you’ve got something close to what you want, you can click, vary
121
and vary that. Now, let’s do some of this in painting. It didn’t remove the cars, but
122
it did change them from a taxi to another car. So you could just reprompt and rather
123
than just say, remove, let’s try that again. Let me go in paint right here. Let me say
124
brush selection, box selection here. Let’s say remove car, see sidewalk and let’s hit
125
that. It’s quite a difficult thing to do, but let’s work with this. Now, let’s go back
126
while this was working and we can see it prompting for those. Let me have a look at any of the
127
other ones right here. I also removed two people here. OK, and let’s see if it’s able
128
to work out this with the removal tool. So once again, it has changed the car and not
129
removed it. It’s not great, the removal tool, if I’m honest for this or the editing tool
130
for stuff like that. Let me go one more try and I’m just going to go to edit and I’m going
131
to go eraser. And now I’ve got erase this, erase, erase, erase, erase, erase, erase,
132
remove car, see sidewalk in paint. And let’s put this to the final test. And still, I have
133
a car there. In painting isn’t great. I would put this in my initial prompt when we’re talking
134
about setting, I would say walking down the sidewalk, see no cars. If you don’t want any
135
cars prompt for it, no cars rather than rely on in painting or editing for that. So the
136
other things you’ve obviously got here is I could generate a video from this. I can
137
click and it will say, hey, do you want to use this image to video? We’ll talk about
138
that later. And I can start there and I can generate that and I can see the guy walking.
139
I would also prompt like man walks or something right here. Actually, let’s just do that while
140
we’re on here. Man walks, camera follows back. And let’s just hit that one there. Now,
141
also, while you’re here, you’ve got other options. Like I can remove this set as a reference
142
upscale it. I’ll show you what upscaling looks like. Looks like the same image, but upscaled.
143
I could also expand this. OK, how do you want to expand? I want to think of this board around
144
here to be how big you want your actual image. So he’s here in the middle of shop. Maybe
145
I wanted it this big. So I want loads more right there. You also have your presetting.
146
So, for example, sixty nine. So keep it on the same preference size that you’re going
147
to be creating with. Or if I was doing sixty nine, this is great. If I’m turning a landscape
148
video into a portrait for a show or something, I can do that. So let’s actually do that one
149
and expand this from YouTube style video to a show, because the old way to do it would
150
be to crop in here, wouldn’t it? And to have a show. But then you’ve got a blurred image
151
because it has to zoom in to do that. No longer do you need to do that with AI. You can just
152
extend the image or the video. You’ll see us do a video later. So we’ve got these generating
153
here. This is the video, probably another 30 seconds. It thinks this is using my reference
154
image and this is an upscale version. So now I want to download that. Just click download.
155
When you have a membership, you can say without watermark, download it and it will download.
156
And it’s a great high quality. I can either then just save that if I wanted to on here.
157
And this is the highest quality that you need. And the last one was here. So we had a landscape
158
shot for YouTube and now I’ve made it into a short 916, if you like, a 916 format. So
159
I can use this for a short if I wanted to. So that is how you create images from your
160
text prompt and then use Deep Seek. Make sure your settings and it’s how you modify everything
161
from in painting, which isn’t my favorite tool inside Kling, but they generate a really,
162
really nice image and all in one tool. You go straight from image to video just like
163
this. Look, this guy’s walking realistic with audio playing in the background as snow’s
164
falling from my image generator. I don’t need one tool and then upload that image inside
165
here to get a video generated all inside one tool and video coming up later. So that was
166
text to image. Do go back and pause on those five points if you want your great prompt
167
advised for me or you could just use Deep Seek with it. But I would always use Deep Seek,
168
I think, alongside your prompt that you’ve set up. If you follow those five markers that was
169
having an overview, subject and costume settings, condition, action of subject and camera movement,
170
then Deep Seek is going to know exactly what to prompt for in the best format. So that was text
171
to image. Let’s now go on to image reference because perhaps you have an image you want to
172
use to bring to life or multiple elements for this. Let’s talk about image reference in the next lecture.
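The five prompt markers above can be sketched as a tiny helper that assembles them in order. This is just an illustration of the structure taught in the lecture, not anything Kling-specific — the field values here are made-up examples.

```python
# Minimal sketch of assembling a text-to-image prompt from the five markers
# covered in the lecture: overview, subject and costume, setting and
# conditions, action of the subject, and camera movement.

def build_prompt(overview, subject, setting, action, camera):
    """Join the five prompt markers into one comma-separated prompt string."""
    return ", ".join([overview, subject, setting, action, camera])

prompt = build_prompt(
    overview="cinematic night scene",
    subject="man in a long wool coat",
    setting="Times Square sidewalk, snow falling, no cars",
    action="walking toward camera",
    camera="camera follows from behind, slow tracking shot",
)
```

Note how "no cars" lives in the setting marker — per the lecture, negatives belong in the initial prompt rather than being fixed later with inpainting.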
— Kling: Image Reference with Kling —
Now, moving on to image reference inside Kling. In the last lecture we did text to image; let's move across to image reference. This is a really exciting tool, because it's a way to get consistency between your images — objects, people, locations or styles. There are two options here: style reference and elements. We'll do these one at a time.

Let's start with style reference. Under here you've also got subject, face or entire image. Do I want to use a subject — maybe this dog here, for example — inside an image? You can see they've put it in different locations. Do I want to take a face, for example this person's face, and put that person into different images, just like this? Or do I want to take the entire image, so that if I take that image right there and then create — maybe prompting for a different person in a different location — it takes the entire style?

So let's do these, shall we? Let's take a subject. If I upload this image of me as the subject and analyze it, it's not just taking my face: it's going to take my outfit, what I'm wearing, perhaps the fact I'm sat at a desk. Let's see what it takes. The first time you do this, you may get a warning asking whether you're allowed to use this image. Yes, that's an image of me. Only use images you have permission for, or that are of you. So I can set a subject reference and a face reference. Let's set face reference really high, and keep subject reference quite high too. Now, if I scroll down, I can prompt: "this man is at an old gangster's desk in an old USA home, dark wood, dark lighting". I'm thinking Godfather style. Now I can use DeepSeek right here: click on that and it analyzes what I've said along with my image. Let me scroll back and see: "Based on the character from the reference image, place him seated at a vintage gangster-style desk in an old American home. Maintain his facial features, short brown hair and white long-sleeved shirt. Replace the background with dark wood paneling, dim lighting" — exactly this. Let's use this prompt, generate four outputs in the size I want, just like before, and run with that.

The difference between subject and face — which we're doing next — is that subject keeps my outfit, and will probably keep almost exactly my facial expression, hands and everything like that, as opposed to just taking my face. So it might not be the exact reference you want inside your style; we'll see this in a moment. You may want to use just the face, depending on what this is. Subject suits something like the dog shown before, or an object: if you have a cup you're advertising, a location, a product of some kind like a perfume.

So let's see. It has taken exactly that — weirdly it's kept the lighting around here — but it's taken me and put me in this setting: dark wood, yes, but it's taken way too much. It's got the lighting, it's still got my hands posed like that, and everything. So I would use face for this. Let's go to face and see. I'll drop myself in again and let it analyze my face. I'm actually going to change this — I want it on 16:9 this time. It's analyzed my face; it also found a face on the screen behind me. Let's set the reference strength quite high — though if I put it too high, it'll be exactly the same expression. Let's do that, and base it on the character reference image. Let me DeepSeek that again. So it's using the face instead of the whole image: "using the reference image character's face, 1920s gangster's desk, mahogany". Yep, let's use that prompt and generate. Okay, and here are the images. Let me look at these close up. Yep, that's definitely me, in different variations at a gangster's desk. You can see how you might use these.

Let's also try the entire image. If I want to use the entire image of me, I can set a reference strength, and this time I could say "change the man to a 25-year-old woman". But I'm going to use DeepSeek so it understands what I'm saying, analyzes it, and writes a better prompt for me: "modify the reference image subject to a 25-year-old woman with short brown hair". Yep — let's use that prompt, generate, and see what it looks like. It's taken almost exactly the same image; this should be nearly identical, but with the subject changed. Okay, it's worked something out here. It's changed things slightly, but I wouldn't say that was necessarily what I was thinking. So let's say "change me to a woman with long hair, look similar but change face to be feminine". Let's try with DeepSeek again: "modify to an image of a woman with long brown hair". Run that, and here it is — it's really struggling to turn me into a woman. So it's not ideal for that; I would just prompt and use an image reference instead. I'll show you that in the next part, actually. It's not great at changing me into that — that's a limitation, I think. Where it excels is image generation from a text prompt, perhaps with some of these references, and, as we'll see, video in Kling. Changing me completely into a woman? Not entirely great.

So let's go to elements. Okay: I can drop in a subject, a scene and a style. Let's see if we can do this. If I drop in this scene — the image we just had of me — and then grab a female face... so now I drop in a woman's face right there and add that. This might be a better way to do it, and I love this kind of AI problem-solving. Let's confirm that's the subject. So now I've got my subject, this is the scene, and I can also set the style. Why don't I just drop in the same image here, so it understands I want it in this style: this image, this woman's face. I can also add a prompt, but it's optional, so let's not do it the first time, and let's just do two outputs for the sake of speed. Generate that. Okay, great — look, this is exactly the scene, because I've even got the same screen behind me, the speakers, and this is the woman. I love that they've put the foot up like this — you can see the trainer there. Really nice. So this is the woman inside the same set that I was in. If I wanted exactly my framing, I could prompt something like "symmetrical shot, face on, waist up". I think that's the best way to use it if you want to add a different person into a scene. You could have any scene here, even a famous one, and drop your face into it if you wanted to.

What I like to do: if I remove this right here, take me and make me the subject, then drop in a scene like Times Square. Yes, that's me — confirm. Yep, that's Times Square, great. Let's do two outputs and generate. Then I also want to try a different style: if I drop in an image of this anime style, I've got me, Times Square, anime style — let's generate that. So the first one's finished: it's put me inside Times Square without a style reference. Let me take a look. Yeah, that really is my face, down to the detail of my wrinkle right here. Really nice. Let's take a look at the other one — yeah, there's me inside Times Square. Really nice. Alright, let's have a look at the styled one. So this is an animation style, but I'm not sure it really picked up on the anime reference: what it's done is almost a computer-generated animation style. You could instead use prompting for that — I would say "put this in an anime style". I think elements is mainly used for consistency. For example, if I'm creating images that I'm then turning into video, I could have me in this setting and then me in another setting — whatever person in whatever setting — so that when you make your images, or turn them into video, you've always got the same character throughout; the person doesn't jump about and change. It's a really good way to get consistency.

So that was image reference. Of course, you still have all the other options I showed you in the last lecture: inpainting, editing, expanding, or generating a video from these. Use it judiciously — it's a really great tool for consistency, but not perfect, and, much like anything in AI image or video generation, it may take more variants, more generations and testing with different prompting to get exactly what you want. But it is good for consistency in cases like this: putting me in different locations while making sure the characters stay consistent throughout. Now, the last thing inside image I want to show you is restyle, which is really good. I've seen people use this to restyle images: turning images into anime, anime into realism, or any animation style you want — Pixar style — even making animated versions of real movies through images and video. It's really exciting. So let's talk about restyle in the next lecture.
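The elements workflow above — a subject image, a scene image and an optional style image, plus a prompt — can be sketched as a plain data structure. This is purely illustrative: Kling is driven through its web UI in this course, and the field names below are made up, not a real Kling API.

```python
# Hypothetical sketch only. Shows the shape of one "elements" generation:
# subject + scene (+ optional style) references and basic settings.
# Field names are illustrative, not a real Kling API.

def build_elements_request(subject_path, scene_path, style_path=None,
                           prompt="", outputs=2):
    """Bundle element references and settings for one generation."""
    request = {
        "subject_image": subject_path,
        "scene_image": scene_path,
        "prompt": prompt,
        "outputs": outputs,
    }
    if style_path is not None:
        request["style_image"] = style_path
    return request

req = build_elements_request("me.png", "times_square.png",
                             style_path="anime.png",
                             prompt="anime style", outputs=2)
```

The style slot stays optional, matching the lecture's first run (subject + scene only) versus the second (subject + scene + anime style).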
— Kling: Image Restyle —
Now, the last image lecture for Kling right here is restyle — previously we did image reference, and text to image before that. I think with Kling, the main thing you'll want to use is text to image. Image reference is pretty good — not flawless, but no AI tool I use a lot is completely flawless — and it might take more generations to get exactly what you want. Restyle takes an image as its main input, and it's a really interesting one. It's very, very simple: you take an image — for example, here's an image of me — and you get it in a completely different style. I can either prompt — maybe I want this in anime, or 3D cartoon, or Ghibli, for example — or I can just click Ghibli and generate. Or perhaps I want it in a 3D cartoon, and you can see it generating up here: restyle to 3D cartoon. I can actually click that, go through, and select what I want. Let's generate that. You can go through and generate these into whatever styles you want. Let me wait for these to finish, and I'll tell you what they're probably used for.

Here's me in the style of Studio Ghibli — so there's me right there. For this particular image you might want to reprompt and redo it, because the microphone is a bit off behind me: it thinks it's part of the chair, maybe, rather than in the foreground. But you can keep playing with that. And here's me inside a 3D cartoon — Pixar style, if you like.

Now, why might you want to use these? Because people want to turn them into video. Yes, you can use avatar and everything else, but I could, for example, generate a video from this. Let's generate a video: one output, five seconds, and generate that right here. I can also prompt, say, "man talking, waves his hands" — whatever. I'm going to say "camera is static", because I don't want it to move, and generate. Okay, it's finished — let me just play this through. You can see me explaining at my desk. Really nice, really good.

Now watch: if I click lip sync on this — we'll get into more of this in the video section; I'm just showing you why I think most people are using restyle on images here. If I go to text to speech, I can type: "Hello and welcome to the Kling AI video course right here." Now I can choose a voice. For example, I want a male voice — let me play a little bit of David right here if I turn this up. Hmm — I just realized it reads "Kling AI" oddly. Let me try another one: what does Sunny sound like? "Hello, and welcome to the Kling A1 video course." I don't know why it's saying "A1" — I think because I've typed "AI" and it reads it as "A-one". So let me adjust the text and try again. "Hello and welcome to the Kling A1 video course." It's really struggling with that. You can also upload your own audio if you want. It doesn't matter — for the sake of this, let's add the speech. I'm going to choose where it comes in: right from the beginning. Let's add speech and let that generate. "...and welcome to the Kling A1 video course." Nice — you can see it's lip syncing exactly to that, "A1" and all. I'm just going to download that and keep it.

So, ignore it saying "A1". I think, maybe because this tool is Chinese-based, it struggles with something like "AI" — which is funny for an AI tool. You could spell it out as "A.I.", or, better, when you're doing lip syncing, use actual audio you've recorded and change the voice with that. We get into that later in the video section, but as far as restyle goes, I think that's exactly why people are using it under image right here. You can also use this inside avatar — again, we'll get to that later.

So, back to image and restyle: I think that's how people are using it the most. It's a really interesting workflow — or perhaps you just want to change an image into a different style, and you can do that too. Use it as much as you want, and see whether you need it for your projects. Just so you know, the tool is there for whatever your needs are. So that was image done. The next time we use Kling, you'll see me use the AI video generator. Until then, I'll see you in the next lecture.
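The workaround mentioned above — spelling "AI" out letter by letter so the text-to-speech engine stops reading it as "A-one" — can be automated with a one-line substitution. A tiny sketch; the acronym list is just an example, and none of this is part of Kling.

```python
import re

# Spell whole-word acronyms out with dots ("AI" -> "A.I.") before sending a
# script to text-to-speech, so the engine reads the letters individually.

def spell_out_acronyms(text, acronyms=("AI",)):
    """Replace whole-word acronyms with dotted letters for TTS."""
    for acronym in acronyms:
        dotted = ".".join(acronym) + "."
        text = re.sub(rf"\b{acronym}\b", dotted, text)
    return text

line = spell_out_acronyms("Hello and welcome to the Kling AI video course")
# -> "Hello and welcome to the Kling A.I. video course"
```

The `\b` word boundaries keep the substitution from touching words that merely contain the letters, like "MAIN".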
— Kling 3.0 – Image —
Now, Kling 3.0 doesn't just have amazing video features — those are covered in the video section, with some really good stuff, especially the VFX and Omni over here; be sure to check those out. If I'm over here on Kling, I can also go to image generation, and 3.0 is selectable: you can see there was 2.1, and now there's 3.0. From reading about this, I think it's meant to compete with Nano Banana and the like. I can either do a text prompt for an image, or image reference, of course, like any other tool, and then prompt for changes to that image. So let's do some testing and see how good the image quality is.

I've got quite a simple prompt here — obviously you'd do better with your prompting than this: "old man at night smoking a cigarette, sat in front of his simple house, Asian location, downtown Osaka, close up". I've got my shot type; I've got the main part — a man at night smoking a cigarette, sat in front of his house — his ethnicity, and the location. I've got some options here: I could do 1K or 2K — let's do 2K. I'd like this to be 16:9, and let's just do two outputs, shall we? Hit run to generate. Now a really simple one: "a Tibetan Mastiff sat in snow looking at camera". I want to see how it does with fur — animal fur and hair. Let's run that and generate.

In the meantime, the first one has finished. Let's have a look at this guy, shall we? Okay, here's a man smoking a cigarette. I think the cigarette smoke comes from his hand here, not from the cigarette, and it slightly morphs — he's holding it right at the end. Let's check the other one. But this is not a close up, which I prompted for, and the smoke is here, not there — that's not great if you compare it to something like Nano Banana. The lighting is really nice, though, and I actually love this kind of Wes Anderson setup, with him sat right in front of the middle of the house. You can see this as an opening scene. You might want to reprompt to get that smoke right, because it's a really nicely laid-out image: the colour is great, and it's very realistic. Look at the creases in his face and neck — that's why I like prompting images of older people: you get the wrinkles, so the model has to really work with detail, and with white hair too. The hair texture looks really nice, the blurred background looks great, and even the wrinkles and creases on his skin are really nice. Just the cigarette lets it down, which you could reprompt for. But that is a really nice image.

Now, the Tibetan Mastiff. I didn't say whether it was a close up or not. Look at the hair here — it gets a little lost, but this is beautiful. Really nice, great quality. Let's take a look at the other one too: got a bit of snow. This bit doesn't look too realistic — it could have been a bit more on his nose here, a bit there — but it's a really nice image. The snow looks hyper-realistic; look at this. I'm not sure about these glistening bits, but maybe. And there's even a track behind him that goes past him, like footprints — maybe his owner walked by. So it does very well with fur, and that's a really nice, realistic image.

Let's do a couple more. This time I'm not going to prompt for a shot composition at all; I'm going to give it ingredients, almost: "cyberpunk style image of an Asian girl, outside, night, futuristic; she is holding an open book" — that part is a little specific — "and looking around nervously". So I expect a girl looking around nervously, an open book, nighttime, and a cyberpunk, futuristic feeling. I don't know what composition it will give me, and I don't really mind: I want to test its limits with darkness, neon lights, streetlights, something quite specific like holding an open book and looking around nervously, and how it portrays "futuristic" — that's always a big one. Okay, let's generate.

What's it given me here? This is almost an animation style. Well, it's a really nice image: the composition is low down, the book is open, and she does look nervous, for sure. I want to reprompt this to get realism. It's a kind of Pixar-ish animation — her hands are realistic, but her face isn't. Still, this angle is super cool; it's generated a nice idea. Let me do it again, this time adding "hyper realistic, cinematic", and generate.

While that's running, the other thing you might want to do is image reference. I'm going to upload an image of me and prompt in the same way: "cyberpunk style image of [image 1], outside, night, futuristic; he is holding an open book and looking around nervously; hyper realistic, cinematic". Let's generate and see if it can put me into this shot.

Okay, so the reprompt has produced one realistic image and, once again, one in an animation style. I'll show you this one first: she's looking nervous, it's a nice low-angle shot, the book is open, and it's futuristic cyberpunk for sure. Now this one: night, futuristic — she's wearing futuristic clothing, looking nervous, open book, cyberpunk feel. I don't know what this element is here, but it's quite nice. Look at the reflections in the puddle on the floor behind her, and there's more of this futuristic screening happening here, lights on the wheel. I've definitely got what I wanted. Once again — sorry, I didn't prompt for a close up or a low angle or anything; I wanted to see what it would give me. This definitely looks realistic: the skin, the lighting, no morphing on the hands, it is an open book, and she's looking around. That works really well.

Now we've got me, cyberpunk. Let's look at the first images. I'm not looking around nervously — I'm looking down — but there's definitely a cyberpunk feel, a pink hue, and some lighting that looks like it's coming from the book. I think the next image is better. Let's see... and look at that: that's definitely me, definitely holding an open book. I don't look nervous — maybe a slight pursing of the lips right there. This looks like a train station or something downtown, right in the middle of the shot — nice composition. It has definitely done well to take the image of me and put me in this location. Really nice.

Now, if I want to alter an image that I've got: let's take — I like this one right here. Let's download this guy, then drop the image in for reference. There's the image reference, so it knows which image I'm talking about, and I'll say: "fix the cigarette to be between his fingers, with smoke coming from the end of the cigarette, in this image". Run generate and see if it can repair the problem we had. If you remember, the cigarette was right at the end of his fingertips and the smoke was coming from his hand. And now you can see I've got the exact same image again. Let's take a closer look. This time the cigarette is still a bit near the end, but the smoke is no longer coming from his hand — it could easily be coming from the end of the cigarette, because it drifts over his finger right here. Let's look at the other one. This one is lit, so it's not realistic, but you can see how I uploaded that original image and got the same scene back: same guy, same image. Maybe the face changed slightly? Let's check — no, not really. And the smoke now comes from here rather than the middle of his hand. It's a nice image generation, for sure.

I'm really liking the tool. You can compare it alongside things like Midjourney and Nano Banana, probably the leaders in image generation. And you could obviously use it to remove something, inpaint and change things, as we've discussed before. But that was Kling image in 3.0. There's a really nice feel to the images, especially the night shots — I really like those — and the clarity of details like wrinkles on faces. Love it. Really, really nice. Okay, I'll see you in another lecture.
— Leonardo AI: Tool Overview & Layout —
Now, Leonardo AI. This is a tool that's becoming very, very popular. It has free and paid versions — some features, as with many tools, are only available on the paid version, and I'll explain those as I go through them — but you get quite a lot of allowance on the free version, which is part of why it's becoming so popular. The other part is that the image generations it makes are really, really good. Plus there's motion: like other tools we'll look at, you can turn images into videos inside here, and honestly the quality is really good. It's becoming extremely popular online — I see people talking about Leonardo perhaps as much as Midjourney or other tools.

So I'll break this down over a few lectures. We'll have an intro where I show you the whole interface, otherwise it would be a bit confusing. Then I'll make characters, or images, or both — we'll break that down in the next lecture. And then I'll show you how to get consistent characters, because some of you will want that. It's obviously the hardest thing with AI: making sure we get the same person in all our shots to make a movie — otherwise it doesn't make sense.

So, go to app.leonardo.ai — or just Google "Leonardo AI" and it'll be the top result. This is what the interface looks like. Again, this may change ever so slightly: they might show different feature guides at the top as they rotate and change things, but this is pretty much what you'll see. Here is quick access, if you like, to the main tools you'll want. We'll concentrate on image generation — that's the main one. There's also the upscaler and the canvas editor, which is where you can edit your images; I'll show you that later. And then video and everything else. It's like a quick-access version, but you can also reach them over here, along with the advanced features. There's also some cool stuff like the FAQ and help, which is really good if you're ever stuck on something — though I should cover everything you need in here. And whenever you see a question mark next to something — I'll point these out when we get over to image generation — you can click it to get more information about how that feature works and what you need.

At the top over here is my profile; I can click and access that. Also, here are my tokens. Leonardo works on a token system. You'll see when I generate images that they're really good at showing how much every generation costs: maybe 15 to 20 tokens if I'm getting two images, more if I'm using different models or modes, and perhaps only five tokens if I'm just upscaling something — it varies, of course. So 8,500 tokens a month is the lowest paid tier. I'll show you the plans: if I click on this, you can see I'm on Apprentice at $12 a month. I can pay monthly, or yearly, which is slightly cheaper. For me and the projects I do, 8,500 is enough. It also allows fast generation and so on, and you can see on here exactly what you get and the differences between plans: you can make things private, you don't get watermarks, and all this other stuff, for $12 a month. It's not a very expensive plan or tool, which is probably why it's so popular — it's really accessible.

So, back to the layout. At the top are your featured guides. You can go through them: if I just click "consistent characters", it pops up and you can play a video showing exactly how to get consistent characters when you're generating. There are also things like style referencing, and it'll show you that too. These change over time, but it's really cool to have these feature guides up here. If I scroll down, much like in other tools, there's a gallery of generations that people have made, displayed like this. And the great thing is that if I hover, I can see the prompt. If I like this one, for example, I can click it and see exactly what the prompt was. Some of these will have been made with the auto-prompting Leonardo has built in, which I'll show you — another reason this tool is so good. If I wanted to, I could just copy that prompt; I could do image to motion and turn this into a video; image to image; remix it; or even upscale it. This is a great way to learn prompting inside Leonardo — although, unlike other tools, you don't really need to learn too much about prompting with Leonardo, and I'll show you why. I can scroll through, have a look, and like some of these to save them. The gallery is also broken down by category: all, photography, animals, anime, architecture. If I just want to see some anime images, I can look here and it's all in the anime style; architecture shows buildings and anything that qualifies; the characters are very good — you can see the quality they're getting here is really cool; and the photorealistic stuff on here is actually really good. Wow, that's a cool image.

Okay, so that was pretty much everything on the front page, plus your billing. The next thing — which I'll put in another lecture coming up in a moment — is image generation: if I click into it, this is the layout for generating images. Rather than do it inside this lecture, we'll quickly flip over now that you're familiar with Leonardo AI, and go through exactly what some of these options are while generating some images.
— Leonardo AI: How to Create Images Step-by-Step —
So, this page, Image Generation, continuing on from the last lecture where we learned all about the layout. Right up here is the main part, where we'll be prompting; we'll get to that in a moment. I'll go left to right. Also, like I mentioned, people want to know the cost of things, and it shows you exactly, with these settings down here, how much a generation will cost. For $12 a month I get 8,500 tokens, shown right up here at the top, and you can see this generation costs 24 tokens and gives me four images, so it's not very expensive with regard to how many tokens it costs you.
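To put those token numbers in perspective, here is the back-of-the-envelope arithmetic. This is only a sketch using the figures visible on screen at recording time (8,500 tokens on the $12/month plan, 24 tokens per four-image batch); the plan and pricing may change.

```python
# Rough token-budget math for the Leonardo plan shown on screen.
# All figures are the ones visible in the UI at recording time.
monthly_tokens = 8500          # tokens included on the $12/month plan
cost_per_batch = 24            # tokens for one batch of four images
images_per_batch = 4

batches = monthly_tokens // cost_per_batch          # full batches per month
images = batches * images_per_batch                 # total images per month
cost_per_image = cost_per_batch / images_per_batch  # tokens per image

print(batches, images, cost_per_image)  # 354 1416 6.0
```

So at these rates the base plan covers roughly 1,400 images a month, about 6 tokens each, which is why I say it's not expensive.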
Let's start over on the left and go through these. The first thing at the top is the model presets. If I click that, it pops up right here: these are different presets, different models for different styles, if you like. You can see Graphic Design, you can see Concept Art, and each one gives you a little example of what it's like. Cinematic Kino, for example, is really nice; if I hover over it, it says it combines Kino XL and PhotoReal for great cinematic output. So if you're making a cinematic AI movie or something realistic, you should select that model, but it doesn't mean you won't get similar results elsewhere. If I look at some of these images I've generated, that one is definitely photorealistic, and it was done on Flux Dev, which is this model right here. Sometimes they update, remove, or even rename models, so when you're watching this they might be slightly different. It doesn't really matter, because the preview image shows you exactly what each one means, and if you hover over them, it'll tell you what the model is meant for.

I'm going to keep this on Phoenix 1.0. If I hover over it, it says: Leonardo's proprietary foundational model, delivering exceptional prompt adherence and text rendering. It's kind of like the default model, so you can go with that, and you can play around with the others; if I were making concept art, then perhaps I'd use that preset instead. So that's the model presets.

Next, Prompt Enhance. By the way, if that panel is in the way and you don't want to see it, some people like to tidy up their workspace, just close it there. Prompt Enhance can be Auto, On, or Off, and this is where Leonardo really stands out. Other models can do this too, but Leonardo stands out: I can put in a really short prompt, say something like "Rottweiler close up," and because this is on Auto, it will actually rewrite the prompt into whatever Leonardo responds to best. I didn't write this huge prompt you see right there; I just typed something like "25 year old female at a Burning Man festival looking at camera" and it changed it to this. I'll show you that properly later. I always keep it on Auto, because you might be prompting the way you think is best from using other models, but Leonardo knows itself better than you do. On and Off do exactly what they say; on Auto, it adds the enhancement if it thinks the prompt needs it and leaves it alone if it doesn't. I keep it on Auto.

Style is where you can scroll through preset styles. Unfortunately there aren't the little preview examples we see in other tools, but you can pretty much tell what these are: Cinematic, Creative, Fashion, Graphic Design, Pop Art, Illustration; there are loads on here, Black and White, Color Photography, Pro Film Photography, really nice, and plenty of others to play with. I pretty much leave it as is on Dynamic, which is the default it comes with, and by the way, at the bottom, if you ever mess things up, you can reset everything to defaults. Let's leave it on Dynamic; it pretty much always does what I want, but you can play with those if you have a particular style for a project you're working on.

Contrast is exactly what you think it is. I leave it on Medium most of the time. This, for example, is a medium contrast image: it's not too bright and overexposed, and it's got some shadows on the side of her face. You can change it if you need low or high contrast, but it's nice that it's there.
Now, the generation mode. Watch this: if I'm on Fast, four images cost me 24; if I go to Ultra, it's 104. If I hover over Fast, it's the faster, lower quality option; Quality, formerly known as Alchemy V2, costs 37 and might take slightly longer to generate, though with Fast you can always upscale afterwards if you want. You can see the number changing in the top right as I click between them; that's the cost difference.

Then image dimensions, exactly what you think. Maybe I'm doing 2:3, or perhaps 1:1 for a Facebook image, or 16:9, which I'm going to use for a film; that's your regular shape for YouTube videos and the like. If I click More, I can set it to whatever I want, and it's also got presets listed for socials, devices, and film, which is really nice. I could do ultrawide, like a western movie, but I'm going to keep it on 16:9. I can also choose the output size; you can see at the bottom it's basically telling you the resolution, think of it like 2K or 4K, the amount of quality. If I click between these, the cost barely changes; it's still 24 for medium or large, so it doesn't cost much more to have it at large quality.

The number of images is exactly that: I'm getting four generations per prompt. Watch the counter at the top again: if I go to two, it costs me 12; four, 24; three, 18. You don't get a discount for doing more images. I can change it to eight, for example, and you can choose as many as you want for your plan, budget, and everything else. I pretty much go with four, because I always like four outputs, and I don't think that's expensive with regard to how much everything costs.
Now, below that is Private Mode: hide your generations from the community feed. Remember that big list on the home page before? I can hide mine from there. This is a pro feature only, and so is Ultra by the way: you get it on a paid plan; if you're on free, you don't have the option. Then Add to Collections: if I made my own collection, say for a film, I could name it after the video I'm working on, and whenever I come back in here I can add generations to my own collections, which is really good for organization.

Then there are some advanced settings right here, and I quite like some of these. PhotoReal is nice to have on, but it depends which model you're in: in Phoenix, for example, I can't use it, so let's switch to Cinematic Kino, and now I can turn on PhotoReal to make sure we get photorealistic images. You can also turn on Negative Prompt, which is nice. If I said "a Rottweiler close up" but I don't want something particular in the background, or I don't want a dog collar, because perhaps it would generate the dog with a collar, or I don't want to see his teeth or his eyes open, I can add a negative prompt for that.

Transparency is really nice: it means my image comes out with a transparent background, so you don't have to take it into an additional tool like Photoshop to remove the background; it's removed right here. I'm going to turn these off as I go, because I'm not going to use them for my example.

Tiling is nice too: it means the images can be tiled. For example, you see this guy right here; the right side of the image would finish with the other half of him, so it matches up with the left side, and the same top to bottom. The tiles can repeat side by side and top to bottom endlessly, so you can use them in design, or in a video that scrolls consistently to the right; it would always tile and match, which is really, really nice.

Then you can use a fixed seed, which I never really use, to lock the style and make sure the output matches each time.

That's pretty much everything I need to show you. If you've used this before, there's Legacy Mode up here; turn it on and off, and it's basically the previous version of this layout. It's not too different, but they keep it around after a big update so people familiar with the old style of the site don't get put off; they'll probably remove it eventually once they see more people using the new one. Just so you're aware. Scroll down and here are all your generations. If you ever have any problems or issues, they have a great chat down here where you can find guides and also message them. I've never needed it, but it's great that it's there. This is why Leonardo stands out against the others: it's really, really user friendly.
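As an aside, the tiling option described above has a simple property you can check yourself: a seamlessly tiling image's right edge should continue into its left edge (and bottom into top). Here is a toy sketch of that check on a grid of grayscale values; the function name and the tolerance value are my own, not anything from Leonardo.

```python
# Toy sanity check for seamless horizontal tiling: when an image tiles,
# the pixels on its left and right edges should nearly match, so two
# copies placed side by side show no visible seam.
def tiles_horizontally(img, tolerance=8):
    """img: list of rows of grayscale ints. True if left/right edges meet."""
    return all(abs(row[0] - row[-1]) <= tolerance for row in img)

seamless = [[10, 50, 90, 12], [20, 60, 80, 22]]    # edge values nearly match
hard_seam = [[10, 50, 90, 200], [20, 60, 80, 210]]  # big jump at the wrap
print(tiles_horizontally(seamless), tiles_horizontally(hard_seam))  # True False
```

The same idea, comparing top and bottom rows, covers vertical tiling.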
So let's actually generate something, shall we? I'm just going to use this "Rottweiler close up" prompt. I'm going to put it back into Phoenix; Auto is on, so it's not going to stay as this prompt, as you'll see in a moment. It's going to cost me 37 because I've got four images set, so let's just do two; that costs me just 19. I hit Generate, and you can see it's working on it with everything we set: large quality, two images, private so it won't show up on the feed at the front, contrast on Medium, style Dynamic, and auto prompt on. Because auto prompt was on, we can see here it says "a tightly framed, intimate portrait of a mature Rottweiler's face, showcasing its distinctive facial features: a broad, slightly wrinkled forehead, wide black nose, almond shaped eyes," et cetera. It's finished already in the background, and here is my image of a Rottweiler. Let me click on one of these and see it big. Really nice, that's great. Oh, that's a really nice image; look at the quality of this fur, it looks really, really nice, I love it. Let me close this again. If I hover over any of these, by the way, the details of the prompt are right here, and it also shows you details like the model and features I used, et cetera.
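If you'd ever rather script generations than click through the UI, Leonardo also offers a developer API, which is a separate product from the plan shown in this lecture. Below is a minimal sketch of building the same request as the UI settings above; the endpoint URL and JSON field names are assumptions from memory of Leonardo's REST documentation and may have changed, so verify them against the official docs before relying on this.

```python
import json
import urllib.request

# ASSUMPTION: endpoint and field names based on Leonardo's public REST API
# docs as I remember them; check docs.leonardo.ai before use.
API_URL = "https://cloud.leonardo.ai/api/rest/v1/generations"

def build_generation_payload(prompt, num_images=2, width=1024, height=576,
                             negative_prompt=None, seed=None):
    """Mirror the UI settings: prompt, image count, 16:9 size, extras."""
    payload = {"prompt": prompt, "num_images": num_images,
               "width": width, "height": height}
    if negative_prompt:            # optional, like the Negative Prompt toggle
        payload["negative_prompt"] = negative_prompt
    if seed is not None:           # optional, like the Fixed Seed toggle
        payload["seed"] = seed
    return payload

payload = build_generation_payload("Rottweiler close up", num_images=2)
req = urllib.request.Request(
    API_URL, data=json.dumps(payload).encode(),
    headers={"Authorization": "Bearer YOUR_API_KEY",  # placeholder key
             "Content-Type": "application/json"})
# urllib.request.urlopen(req)  # uncomment with a real key to actually send
```

This only builds the request; nothing is sent until you supply a real key and uncomment the last line.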
Then I can access some other things, though you don't need to use them from here; we can get to them from the image itself. If I hover over and click on this one right here, I can edit that image. So here's my Rottweiler, and it looks like a Rottweiler, of course. If I wanted to, I could use this Edit with AI option; the examples say things like "change coat to red," "add a blue hat," "make it vintage." I'm going to say "change the eye color to blue," which is obviously not usual for a Rottweiler. I can apply it to just one of my two images or select both; let's do both, and use the fixed seed, which means it keeps the same image and style and won't change anything else. Then I click Generate; it's going to cost me 19 again. Okay, that's done in about 10 seconds. This one, you see, didn't change, but if I click the next one, sometimes it does and sometimes it doesn't: boom, my dog has blue eyes, really, really nice. This is where in other tools we would have opened an editor, like in Midjourney, erased the eyes, and then prompted the change; here you can do it all with text instructions, and it's pretty responsive. I've tested this a few times and it works; I love it, really good. So that's the edit.

Now, if I hover over here, I can tick this and it selects the image so I can get to it later or add it to a collection. I can download it right here, and I can also make the image public, because right now, remember, it's in private mode.
I can upscale this image: let me click this, and I can change the dimensions here, make it Ultra, and if I click Advanced Settings I can adjust Creativity, essentially telling the AI it can make the result slightly more creative and change it a little, perhaps adding some color, tweaking the background, or having his tongue stick out a bit more; I'm giving it permission. There's also Similarity, which you can make stronger or weaker, plus Details and Contrast, so I can push the details up if I want even more. It's going to cost me 44 to upscale this. Let me close that. So if you're finished and that's your final image, and you want a great quality version of it, that's how you do it.

Edit Canvas I'll get to in a moment, and there's also Generate Video; we'll cover that in the video section later, where I'll add a Leonardo lecture and actually turn this into a video. If I click the three dots, everything is right here: Remove Background, so instead of this light blue gray background he'll have a transparent one; copy to clipboard; delete the image; Describe; use the image as guidance in my next image; Edit Canvas, which I'll get to in a moment; and Edit Character Pose, which is really nice. It's in beta right now, it hasn't been out for long, and I can say "dog's body turns," "dog's head turns around," et cetera, to change the character's pose. It works best with a human in a very clear pose; with a dog it might struggle slightly.

So now let's go into Edit Canvas. Let me click this, and it brings up this whole editing interface. I already showed you editing with a text prompt, which is probably most people's preference, but I can also do other things here, like inpainting, with strength and number-of-images settings that change the result too. Let's go through some of these. I never really use this, because I've never needed to; the text prompt editing I showed you is so good. But if I wanted to, I can use the selection tool to mask an area off; let me just do a little bit. If I click over here, I can set the inpainting strength and the number of images I want, and draw the mask on. I could, for example, select this eye right there, and then prompt in here to change it to blue. Or if I wanted to erase something, I could click Erase, say his ear is a bit too big right here, and then use the text prompt to change that. I never really like to use this; like I've said, I always use the text prompt. But it's a way to force a change: if prompting isn't changing the eye color, this will always do it if you need it. Have a little play with it; not my favorite tool, but pretty cool. It's not quite the same as in Midjourney, where we selected and erased to make changes, and you don't really need that here, because you can just change things with prompting.
So let's go back to our images and play with a few more things; these are the more important ones you'll want alongside your text prompt. I can add an image here: either an image as a style reference, an image for image-to-image if I want to change something, or a content reference, which is how we do consistent characters and things; I'll show you that in the next lecture. I can upload an image for reference if I need to. For a style reference, for example, I can click that and use my past generations: say I want an image in the style of this one right there, I can choose and confirm it, and you see it's loaded down here. If I click underneath, I can set it to Ultra High, and it acts as the style reference, much like we saw in Midjourney. Then I could say "a 20 year old man at a music festival looking at camera," and if I hit Generate it's going to cost me 21, because once again I'm getting two images. Let's generate that, and it should use the style from the image I chose for my new image. Nice. Notice what it hasn't done: the prompt is exactly "20 year old man at music festival looking at camera"; when it's using an image as a style guide, it didn't use auto prompt to rewrite it, and I didn't give it any guidance. So I never told it what he's wearing, whether he has a beard, what kind of hair he has, et cetera, but it's using the same kind of styling from the reference image. Let me just get rid of that.

If I come over to the right hand side here, I can also choose New Random Prompt; if I select it, it just makes a completely random prompt with AI: "a brilliantly vibrant plush phoenix, complete with soft feathers, shades of fiery red and gray." Okay, it's generating that; we'll come back to it, and I'll keep playing while it works. Back over here there's also Improve My Prompt: if I put a prompt in, it can improve it, so while I was typing I could have clicked Improve Prompt, Edit with AI, or Describe with AI. Oh, and here's my red phoenix, the absolute randomness it made from that generated prompt. I don't know why you'd want that, perhaps just to play around and have fun; it may also have picked a phoenix because I'm in the Phoenix model, I'm unsure.

So that is how you generate an image. Then, of course, you can download it, or upscale it first if you need to, and either turn it into video in here or use it in another tool like Runway, if you prefer generating images here but video elsewhere; we'll talk about all the different video tools later. Turning images into video comes in the video section, section 10, once we've completed all these image lectures. So that was Leonardo generating an image, but now I'm going to quickly show you how to get consistent characters. They're super important: for a good movie, obviously you want consistent characters throughout. So let's go in and I'll quickly show you that.
— Leonardo AI: Consistent Characters —
So, character reference inside Leonardo AI. This is important. We've seen ways to do it with other AI tools, and you'll see more, but this is actually probably one of the best tools for it. I'm not in Phoenix, the model we were in before; I've come over to Cinematic Kino. If I'm on Phoenix, let me show you: we go to here and we don't get character reference; under View More it says it's actually coming soon for Phoenix. So let's go back, click Cinematic Kino, and close that. Then, in the same way, right next to your prompt bar up here, click that and go to Character Reference. Now I can upload an image or use my own generations. So if I want to use this guy and put him in a new place, I can; or if you have a photo, perhaps of yourself, or something you have the rights to, or something you've created elsewhere, you can upload it right here, which is great.
That's how people put themselves in outer space and things like that. But I'll just quickly use my generations. Let's have this guy; he looks happy. Choose him, confirm. Then all I'm going to do is prompt something like "this man in the middle of a city street in New York City." And you saw it before when we chose the style reference: at the bottom here you can see it's now selected as a character reference, a bit like in Midjourney, where the corner showed whether you had a character, style, or image reference. I can also ask, "hey, can you just improve my prompt for me?" I click that, it works on it, and it comes back with "a strikingly mysterious man standing in the bustling streets of New York City." This is what Leonardo likes; you can just use your own prompt if you want to, but it's still using this image as the character. Now I'm going to set his strength to High: Mid might change him ever so slightly, while High will keep the character extremely similar.
So now I can click Generate. When would you use this? Obviously throughout your movie: if you have one or two characters and you need them in different places and different scenes, of course you want the same person in each scene. Otherwise, if you cut from one shot to another and they look different, it ruins the movie altogether. People also use this for things like storybooks, when they want the same character in different scenes. It works best with realistic and photorealistic people, but it will work on animation and everything else too. So now it's generated, and we've got the same guy right here. It's given him some tattoos on the neck, which I could remove in the editor if I wanted. Let's have a look at him: here's this guy in New York City. Okay, nice. Did the prompt say anything about that detailing? Oh yes: "intricate tattoos peeking out from under his rolled up sleeves." So I could have removed that. The improved prompt also has "a high definition portrait, immersed in the energy of city life" and "every detail of the man is meticulously portrayed"; let's keep those and just remove the parts about tattoos. Okay, yep, I think that's everything. All right.
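That manual edit, deleting the clauses of an auto-improved prompt that mention a detail you don't want, is easy to mimic in code. A small hypothetical helper (the function name and the comma-splitting approach are mine, not anything from Leonardo), using the tattoo example above:

```python
# Hypothetical helper mirroring the manual edit above: drop any clause
# of a comma-separated prompt that mentions an unwanted detail.
def drop_clauses(prompt: str, banned: str) -> str:
    clauses = [c.strip() for c in prompt.split(",")]
    kept = [c for c in clauses if banned.lower() not in c.lower()]
    return ", ".join(kept)

prompt = ("a strikingly mysterious man in New York City, "
          "intricate tattoos peeking out from under his rolled up sleeves, "
          "high definition portrait")
print(drop_clauses(prompt, "tattoo"))
# -> a strikingly mysterious man in New York City, high definition portrait
```

Handy if you regenerate the same character often and keep having to strip out the same unwanted detail by hand.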
Still the same settings, character strength on High. Let's generate. And here's this guy once again, but without any tattoos; I removed that from the prompt and it hasn't added them. It is definitely the same guy right here, and here he is. This one's a little more detailed in the face, showing his jaw and things, but yeah, it's great, it's definitely my character. So for consistency amongst characters, which a lot of people ask about, "how do I get it consistent?", this is how you should do it: using the character reference, either with your own upload or, sorry, with one of your generations. And it stays on here, so if I want him to go into a cafe, or put him in a new place, or change the shot type, I can do all of that with prompting. So that's consistent characters, and that's Leonardo AI. The only other thing we could do is turn this into a movie, which I'll show you in the next section: how much he moves, what it looks like, what he does, and all that other really fun stuff. Hope you enjoyed Leonardo. Let's go and check out another AI image tool now.
— Gemini Overview: A New Era in AI Imagery —
Moving on, I'm going to show you Gemini. Now, this is probably one of my least favorites, along with ChatGPT, which I just showed you, and perhaps also Meta; Grok is pretty cool with its results. As far as usability goes, I think Gemini is probably one of my least favorites, but it is free. So I'm going to show you all the options that are around, and of course it all depends on your budget, your project, and what you might like. I'll go over this quickly; it really won't take long, just a quick lecture showing you Gemini.

Once again, go back over to your page, aivideo.school/ai-image-generation; you have access to this page. If I open the dropdown on Gemini, at the bottom is the official page for you to click through and access it, plus some details about Gemini right here. You'll see there's less than if I go to Stable Diffusion, where there's step by step stuff for the interface; Gemini just doesn't have that much to it. That makes it a lot easier to use, and some people prefer it for that. Also, if you're looking for a one stop shop: as I've shown you in previous sections, Gemini can generate your scripts just like ChatGPT, and you can ask it for ideas as well as generate images, so you could keep everything neat in one tool. But you have far fewer options when you're creating video, for specializing and really drilling down on images, and for consistency between images if you're creating characters. Anyway, let's copy some of the prompts that we have here.
Let's do "a portrait of a young woman, detailed textures." There we go, let's go with that one. I'm going to type "generate an image of" and paste that in; I do this because you can ask Gemini lots of different things, so it just lets Gemini know that I want an image. Punch that in and wait. Oh, I've just got this message up here. I forgot this about Gemini: sometimes, though not always, because I've definitely generated people in it before, you get some odd refusals, and here it tells me to try Gemini Advanced if I want to generate people. There's a cost to that, of course, and the whole idea of showing you this is that it's free; if you're paying, you might as well use something like Midjourney, which I've been showing you and will show you more of later. So I'm sticking with the free version. That means you'll be limited with your prompts, but perhaps you just want city scenes, or images that don't involve people, and that's all your project needs. So let's do that: "an enchanted forest of glowing flowers, misty atmosphere, magical light, inspired by fantasy art." "Generate me an image of," paste that in. Okay, and here's the result. Let me click and take a closer look. Yeah, definitely got exactly what I wanted: fantasy art, mystical forest, everything I asked for. There's the option to download, obviously, but there aren't many options if I want to edit this: really just copy, report it, share it, and rate it good or bad. That's pretty much it, unless you're on Advanced, which I'm not going to subscribe to and show you, because for the cost you might as well use one of the better models built specifically for images, as opposed to this broad, general purpose conversational AI platform. But we can try things like this: let me copy that last prompt, add it in here, and say "aspect ratio 16:9, landscape," just so it knows. Let's see what it does.
Okay, let's take a look. It actually didn't generate it in 16:9, which is really strange; it did it in 1:1. And the last image generation, let me see... also 1:1. That's strange. You'll find this with Gemini, and even more so with Meta: when you generate things, they ignore your prompt for size. I think maybe it's because with Meta, on Facebook, 1:1 is more popular. Let's try again: "a 16:9 landscape image of a forest," keeping it really basic, with the ratio right at the start. Once again, 1:1. You're just limited with the options on Gemini; you can keep prompting and prompting and maybe get there eventually, but really, unless you're struggling to find a model, want everything in one place, and don't want to pay, that might be it. Let me just download this for a second; I always like to check, and it's quite a big image. Have a look at the quality of the full download: it's not bad, actually, pretty good, with some details here. I could definitely upres this with another tool, and I'll show you exactly that later in the course.
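Since Gemini kept returning 1:1 images no matter how the prompt was worded, one practical workaround is to center-crop the square download to 16:9 yourself. The arithmetic is simple; this sketch just computes the crop box (the function name is mine), which you can feed to any editor or library such as Pillow's `Image.crop`.

```python
# Workaround for Gemini ignoring aspect-ratio prompts: center-crop the
# square output to the target ratio. Pure arithmetic, no image library.
def center_crop_box(width: int, height: int, ratio_w: int, ratio_h: int):
    """Return (left, top, right, bottom) for a centered ratio_w:ratio_h crop."""
    target_h = width * ratio_h // ratio_w
    if target_h <= height:                    # crop the top and bottom
        top = (height - target_h) // 2
        return (0, top, width, top + target_h)
    target_w = height * ratio_w // ratio_h    # otherwise crop the sides
    left = (width - target_w) // 2
    return (left, 0, left + target_w, height)

print(center_crop_box(1024, 1024, 16, 9))  # (0, 224, 1024, 800)
```

A 1024x1024 download cropped this way gives a 1024x576 frame, which is exactly 16:9, at the cost of losing the top and bottom of the scene.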
59
what you can do if I click upload image right here and just upload that image we just downloaded.
60
Or of course, you could be using an image that you found elsewhere. Again, it might
61
be a bit funny with people or famous people or something. We take that forest scene that
62
we have here. That’s kind of at dusk sunset. I’m going to say make this image of a forest
63
bright daylight. OK, it’s done. And let’s have a look. Yeah, the exact same. Let’s have
64
a look. This this forest scene and then the one they just generated for me there. The
65
exact same forest scene, but daylight. So Gemini is good at responding to prompts inside
66
the chat as opposed to if you were to set these like we showed you a mid journey in
67
other places. It does respond well. Struggles of aspect ratio. You would struggle with people.
68
You would struggle with continuity between scenes and things. We’re making a video, but
69
it is a tool. It is free. And perhaps if your projects just need landscape images or product
70
images also. Also, if I generate this again, I’ve seen that this time didn’t do it the
71
first time. I’ve got different versions right here. OK, this one right there. This one right
72
here. Fairly similar. And the first draft isn’t loading. And I can say regenerate these
73
drafts. So here it is. Here’s draft two. Much greener, brighter. Here’s draft three. Much
74
more natural daylight in here. And draft one never seems to load. Bit glitchy there
75
from Gemini. But yeah, if you wanted to use this for things like products or if you just
76
needed landscape imagery, it’s a free tool. And in one once if you’re doing it for social
77
medias, TikTok, Facebook or something, you could be asking for a product like a an image
78
of a futuristic iPhone typo. And perhaps you’re making like a mock ad for the new iPhone
79
that’s coming out or like a futuristic phone or something. You know, these fun projects
80
that I’ve seen online, you could definitely get it to generate you stuff like that for
81
products if you wanted to. But like I mentioned, one of my least favorite. And now we go on
82
to maybe my second or third favorite image creation tool. We’re going to talk about stable diffusion.
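Dedicated image APIs avoid that aspect-ratio guessing game by taking the size as an explicit, typed parameter rather than prose in the prompt. Here's a rough sketch of the idea — the endpoint path, engine name and field names follow Stability AI's v1 REST API as I understand it, so treat them as assumptions to check against the current docs:

```python
# Sketch: requesting a 16:9-ish image by setting explicit pixel dimensions,
# rather than hoping a chat model honors "aspect ratio 16:9" in the prompt.
# Endpoint path, engine id and field names are assumptions modeled on
# Stability AI's v1 REST API -- check the current documentation before use.
import os

API_HOST = "https://api.stability.ai"               # assumed host
ENGINE = "stable-diffusion-xl-1024-v1-0"            # assumed engine id

def text_to_image_payload(prompt: str, width: int, height: int) -> dict:
    """Build the JSON body; width/height pin the aspect ratio explicitly."""
    return {
        "text_prompts": [{"text": prompt, "weight": 1.0}],
        "width": width,     # 1344x768 is a widescreen SDXL-friendly size
        "height": height,
        "cfg_scale": 7,
        "samples": 1,
    }

payload = text_to_image_payload("A landscape image of a forest", 1344, 768)

# The actual call (needs an API key; left unexecuted in this sketch):
# import requests
# r = requests.post(
#     f"{API_HOST}/v1/generation/{ENGINE}/text-to-image",
#     headers={"Authorization": f"Bearer {os.environ['STABILITY_API_KEY']}",
#              "Accept": "application/json"},
#     json=payload,
# )
```

Because the size is a parameter rather than prose, there's nothing for the model to ignore — exactly the control a chat interface like Gemini doesn't give you.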
— Stable Diffusion Overview: AI Art Made Simple —
The next tool I want to show you, unlike Gemini and DALL-E, is from a company specifically known for AI models — image, but also video and audio — and you can see the options here. If you need to, come back to our page, AI video dot school slash AI image generation, and from all the dropdowns you can see how to get to all the different tools. Under Stable Diffusion, just use the dropdown menu at the bottom and you can go to the website.

Now, first I have to tell you: there is a free trial of this for three days, with a limited number of credits, so you can go ahead and play with it and see if it's something you like and what you think about it before you commit. I think it's ten dollars eighty... let me actually just go through all the different subscriptions so I can show you. Monthly, there's nine dollars a month for nine hundred credits, nineteen dollars a month, forty-nine dollars for five thousand five hundred credits, or a huge one with twelve thousand credits. So it's completely up to you what you need.

Now, unlike the tools I showed you with DALL-E and Gemini, which are more chat-based, ChatGPT style, this is actually somewhere in between. It's not quite as technical to use — or as scary as people sometimes think Midjourney is, which is why Midjourney made a browser version over their Discord channel — but it is definitely, definitely more catered towards image, video and audio. A lot of people love this, and it's their go-to more than Midjourney. So this is probably the next step for you if you don't like Midjourney: use Stable Diffusion. But let's have a little play and go with things.

For this I'm going to go for Image — we'll check out video in the next section for Stable Diffusion. Let's go to Image; it's going to give me some details about image generation and so on. Okay, let's try it. If I go back onto our page, I can see all the details we've made for you about Stable Diffusion — what it's about, the ideal prompts for it. I can copy this prompt right here, number three. Let's go back into Stable Diffusion and put it in for now: "portrait of an elderly man, deep wrinkles, soft lighting, hyper-detailed realism, 35mm film style". Let's put that in, and we're going to play with this. I'll show you what all the other features are for editing and where everything else is on the page, and we'll follow along after I've generated an image for you to
have a look at. Okay, that was quick — Stable Diffusion is quick. Let me take a look at this. Wow, the realism. Yeah, it made me this older gentleman — a really cool black-background look with some really deep wrinkles, just like I asked. That's a nice image, it really is.

Now, what can I do with this image inside Stable Diffusion? It's 1:1. Let's first go to our description panel right here, where we would prompt, and I can go over here and do things like adjust the aspect ratio — widescreen, 3:2, 1:1, 4:5, 16:9. Right now it's obviously doing 1:1 for me; I want to go widescreen for this.

I can also come over here and go to the style palette. This is actually really cool: you can upload images. For example — let me just do one for you. Okay, so this is an image I've just uploaded. Make sure it's selected, click confirm, and I'll see it right here. That's actually an image AI made of me — or what it thinks I'd look like if I were AI-generated. Much more handsome than I am; very flattering. So I can put that in here, and I can now ask for the same thing. If I paste in "portrait of an elderly man, deep wrinkles, soft lighting, realism" — you can't do things like set the weight of what you want, but it's going to use this as a guide for the image it creates. We're going to see lots of this inside Midjourney, where we can use images and styling and tell it "I want this composition", "I want this color style or lighting", based on other images you have — which you can get from elsewhere or generate yourself. Really handy tools when you're trying to get images in a certain style.

Okay, yeah, I can definitely see the 16:9 as opposed to this version right here. What I've done — let me show you the actual image, let me just grab it here. So this is the image I uploaded right there; let me move that to the side, and this is the image that was generated. You can see it's almost used the background of the one I uploaded, subject centered. It's got very similar lighting right here. I've lost some of the deep wrinkles I asked for and such, but it's using this image as a reference for everything from the lighting to perhaps even the character — you saw before it generated an Asian man, and now it's generated this white guy. So it could be me in the future, what AI thinks. That's how you use that tool.

And if I come back over — just to remind you to follow this through if you want to — you can go through these steps one by one, and that's probably how I'm going to teach you this platform, starting to use the different elements on this page. So we showed
aspect ratio right there, which was this button right here. Then, the next point: using the paint palette icon, you can either upload an image, if you don't have any generated images, or select generated images in the style you want. Think of this as your style guide — that's exactly what the paint palette is. I can upload and say I want it in the style of this one. Or you could upload a hundred images, for example — I don't know if there's a limit here — but say you upload ten images in styles you've seen online and select them all; now when you're creating, it's definitely going to use that style, which is great. We'll also see something like this in Adobe Firefly later, and the great thing there is they have their own catalog inside Adobe Firefly — but you can upload your own images from wherever you want.

Now the next tool I want to show you, going down the list so you can follow along: generating another image quickly. To quickly generate, you can click Quick Generate — which is here if I hover over "generate another image" — and there's also Reinvent Your Image. First, let's generate another image. What I quite like is that inside your prompt bar right there, you can see the image it's using to regenerate. So whenever you're doing something, if you meant to use an example image as your style guide and you've wondered why it generated something peculiar that looked nothing like it — it probably wasn't selected, if it's not showing here in your bar. Okay, and here's our other version right here. Yeah, that's nice — let's take a little look in. Look how nice this hair is, greying at the side; that's definitely what I'm getting right now. And the detail is really nice — stubble, just like the image I showed it of myself. That really, really works well.

So then, if I go on to here, I can Reinvent Your Image. If I click this button right here, there are three options to choose from. The first: a new image with the same style. Let's go through this and fully check it out. A new image with the same style — I'm taking this away, and I'm just going to say "an elderly woman". I'm not giving it much here, for the sake of this tutorial, but you can see right here what will happen. Okay, and here is the image — let's bring that up. Really nice. You can see the background, centering, lighting — everything is using the exact same style as this one. It's a lot like what you're going to see later inside Midjourney: if I wanted this image, for example, to be in the same style, I'd click here. You'll see that in Midjourney later; lots of these tools have it. We haven't seen it so far in Gemini and DALL-E, and you won't see it in things like Meta or Grok, but it is here, because this is a proper AI image (and video) creation tool. You're going to see the difference between these and those other AI tools that just generate images for fun — we're serious AI image generators here. So
the next one. If I just do the same thing — I realize I skipped ahead there — if I hover over here: Reinvent, use the latest. A new image with the same style, which is very similar to when I said generate another version. I just click this here and I'm getting another image in the same style — but we pretty much got the same thing when we used Generate Another Version right here. And then there were the three dots I just showed you: Reinvent With Other Settings. The first option we showed was a new image with the same style; now let's do a new image with the same structure. Let's copy that. This time I could have the same thing — same structure, same everything — but in anime style, I could say. Or, let me just go "comic book style" for example, confirm, and run that.

Okay, that's finished here, and we've got a comic book style. Let me just click in. Yeah... somewhat comic book style. It's more like they've done realism and then put a filter around it, like an outline — you can do this kind of thing in Photoshop — not so much comic book style. It also had a man here, maybe because it's referencing this original image right here. Let's do that again, actually — let's do it with this one, and this time I'm going to say "in anime style". Confirm. Okay, let's have a look. Yeah, it could be modern anime, I guess — we could be seeing that — but we've got far too much texture in the skin here, I think, to be true anime. So not perfect — but that's what you'll find with AI images: you will just have to regenerate and regenerate. If I click that again, I can just regenerate.

I love to play with all the different models. Every week, when I see a new one come out, I test it and play with it. I really love it — I could spend hours and hours doing it — and now I'm showing you the top ones, the ones I think are going to stick around for a while. So we can continue and just keep playing with this; I could lose days just generating images to see what different platforms do. And look — when I regenerated that again, now we're getting more anime style for sure. See, there's less texture in the skin. Still slightly too much realism for my liking, but it's even adding things like "anime" — obviously manga, the predominantly Japanese animation style — and we definitely look way more anime there. So you could just keep regenerating and regenerating; you'll see that's something we have to do inside all these AI models to get exactly what we want. So the last one
under these I want to show is Sketch to Image, which is good. We can turn this into, let's say, a painting — or, I like doing stuff like a pencil sketch. Let's add "pencil sketch" and confirm. Let's take a little look now that's done. Yeah, it's definitely a pencil sketch... actually, it looks more like noise, like the background of an old TV, doesn't it, rather than a sketch — to sketch that with all these dots would take forever. I don't think that's quite right, but let's just regenerate and see what happens one more time. Okay, now we're getting more of a pencil sketch — I think. And that's pretty much the same person, I think. Slightly older maybe — or maybe that's just the style. Not quite pencil — there's no pencil lines or anything, so not a very good pencil sketch — but you get the idea.

Now, what happens if I combine things? For example... I know, let's download this right here. Actually, I don't even need to download: if I come into my palette right here, everything you've generated is in here. So if I select this — and we know we're using it as a style guide, but I think it should also take some of the person — let's confirm. This person is now in my bar right here, and we can go Sketch to Image and say I want this as a painting. Confirm, and see if I get pretty much the exact same guy as a painting. Here we are, let's take a little look. Yes — it's using the same guy between the two, as an oil painting. So that's how you can get consistency between characters. And if I look through, it's definitely got some painting right here, definitely in the background. Not the very best painting, but it's given me the feel for sure.

Now let's come back to the page. The last big thing I want to show you is the toolbox — the toolbox icon on generated images. It allows you to directly edit the image: upscale, inpaint, erase parts, replace the background, zoom out — loads in here. This is where you're going to be doing lots of your work changing the image. So let's choose one I'm happy with — I like this woman here. Let's open the toolbox, and now I've got these five choices; we'll just go through them.

Let's go to Inpaint right here. Now, inpainting isn't always 100%, I've found — you'll find this in a lot of AI models, you may have to do it time and time again. If I just grab the shoulder here, let's inpaint this area right here. I'm just going to go up here and type "a small yellow bird", so it's going to be on her shoulder. Let's just add that in right there and confirm. And now it's generated — and it's done nothing. We're going to see that
time and time again with AI, which is kind of frustrating. Let's just adjust the brush size so it's easier... okay, let's select this, and let me give it a bit more detail and see if that makes a difference: "a small yellow bird on her shoulder". Let's generate that — and if not, I'm going to try with just the word "bird". Okay, this time, because I put "on her shoulder", we are getting... yep, there's definitely some kind of budgie or bird on her shoulder. And now that's on there, I could just say okay, let's get another version of this, and we can see side by side what it does this time. Okay, this one is better — look at this one right here, with a bird on her shoulder. Although the feet are slightly lost, so we can actually take this image right here and try one of our next tools.

Let's go into the toolbox right there, and I could probably go to Inpaint, just like that. Now, what I do in Midjourney that I haven't tried here is I just say "fix feet". You'll see me do this in Midjourney all the time: if they've got too many fingers, which is common with AI, I say "fix hand" and it manages to fix it pretty much first go. So let's see what happens if I just say "fix feet" inside Stable Diffusion, for a direct comparison with something I use a lot in Midjourney. Okay, let's have a look. I didn't tell it exactly what to do... okay, it's lifted up one foot, but that one's still hovering. You'd have to go in and play with that quite a bit. Or — let's go back in here, we can start mixing stuff up again. I can do this, confirm the selection, and say "an elderly woman facing camera with a small yellow bird on her shoulder". Let's hit that. Okay, let's see... this one's flat and this one they've lifted, so yep, this one works a little bit better. Great — so you could just do it directly like that once you've started generating and fixing. But it's not the same woman, for sure.

So let's just clear my selection and confirm, to make sure there's nothing in the bar interfering. Now we can go back onto the toolbox — sorry, let's click here. Next, let me show you Erase — the exact opposite, right here. Let me just erase the bird like that and confirm. Here we go — that was super quick, that one — and it has perfectly removed the bird. Brilliant, exactly what we wanted.

Next, all I want to show you inside this is Replace the Background. You replace the background with a description, or I could use a reference image if I wanted to upload, say, the jungle or a forest or a desert or something — or I can describe it. Let's do "in the jungle", and "adjust the subject and lighting" — this is a really good option here, it changes the lighting to match. If you take it off, it might look like the character's not really there, but perhaps that's what you want. Let's click confirm. And now we've seen this woman — this poor woman — you see the lighting changes ever so slightly. Still got the same shadow right here and here, but it's definitely a different tone, and now the poor woman is inside the jungle. Stable Diffusion does a really good job at this — perhaps better than Midjourney sometimes, I think. Trying to change the background in Midjourney isn't always that easy; Stable Diffusion is really good at it.

So next, I guess I want to do a few more. Remove Background, confirm. While I'm here I could
at that so next I guess I want to do a few more remove background confirm while I’m here I could
189
punch out a few more let’s go to search and recolor I want to recolor her shirt and let’s make it pink
190
confirm stay with that first original image upscale it confirm let’s try search and replace so what do
191
you want to replace the background with space no what if I want to replace her hair with a hat okay
192
confirm let’s have a look at some of these generations so this one was removing the background
193
done a really nice job it’s a bit jagged here I think if I zoom in not great but a pretty good
194
job you could take this away to another platform and just neaten up this cutout if you wanted to
195
pretty good job it’s done there and then I’ve changed her shirt to pink super easy that’s so
196
much easier than if anyone here has ever gone into Photoshop or something and then you try to
197
change the image color of our such a nightmare so that makes it a lot a lot easier this one was for
198
me to upscale it so the image quality for this one compared to say the original let’s have a look
199
just quickly yeah you can definitely see there’s more detail here slightly more vivid almost it
200
feels also there’s some great detail the line the lines around her eyes if I zoom in you can see
201
the detail in hair is still kept that’s really nice and this look even like the effect in her
202
skin right there this gets a little bit more like a painting but that doesn’t quite match pretty —
203
good and around the mouth here really nice really nice so that’s how you do that and then I replaced
204
her hair with a hat I didn’t give any description of a hat and it’s done exactly that the toolbox
205
feature in stable diffusion is really really good it does what it exactly says so many times you’ll
206
see in other tools mid-journey included you may do multiple generations iterations because it
207
didn’t quite do what you wanted to but stable diffusion seems to be and they have some bold
208
statements on their site about it being the leading tool all around and it really does it really does
209
give you a lot of bang for your buck we’re gonna compare this directly to mid-journey at the end
210
when I show you some more features in there I’m just gonna show you image to video let’s confirm
211
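Edits like that Search and Replace step (hair becomes a hat) can also be scripted rather than clicked. A minimal sketch — the endpoint path and multipart field names below are modeled on Stability AI's v2beta "search-and-replace" edit API as I understand it, so treat them as assumptions and verify against the current docs:

```python
# Sketch: a "search and replace" edit (e.g. swap hair for a hat) driven over
# HTTP instead of the web UI. The endpoint path and field names are
# assumptions modeled on Stability AI's v2beta edit API -- verify before use.
import os

API_URL = "https://api.stability.ai/v2beta/stable-image/edit/search-and-replace"  # assumed

def search_and_replace_request(image_path: str, search: str, replace: str):
    """Build the multipart pieces: what to find, and what to put in its place."""
    data = {
        "search_prompt": search,   # the thing to locate, e.g. "hair"
        "prompt": replace,         # what should appear instead, e.g. "a hat"
        "output_format": "png",
    }
    files = {"image": open(image_path, "rb")}
    return data, files

# Example (not executed here; needs an API key and a real image):
# data, files = search_and_replace_request("woman.png", "hair", "a hat")
# import requests
# r = requests.post(API_URL, data=data, files=files,
#                   headers={"Authorization": f"Bearer {os.environ['STABILITY_API_KEY']}",
#                            "Accept": "image/*"})
```

The useful part is that "search" and "replace" are separate fields, so you never have to describe the whole image — just the region to find and what should replace it.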
Now, I don't give it any instructions on what happens, but if you just need to animate something — I've seen people do this with memes: they drop a meme in, animate it, see what happens, and just run it again and again and again. A super easy way to generate a video if you don't need any particular direction. Let's have a look at that. Okay... not great, not great — in comparison to Runway, which I'm going to show you, where we get far more direction. The camera kind of zooms in and her face melts. If that's what you were going for, great, but it isn't really what I would use — I would not be using this tool for that. Stable Diffusion does have its own video generation inside here, though: rather than going from an image like that, you can have much more control. I'll show you that in the video section, which is the next section of the course. So that one's not great.

But Stable Diffusion is a great model. I think Stable Diffusion and Midjourney are the two leading tools. We're going to see Dream Studio, which uses Stability AI's Stable Diffusion, in the next one — perhaps you'll like that platform, with a few less controls and a price that's more than Stable Diffusion directly. But it's a market leader alongside Midjourney. And for video creation, I guess it stacks up against Pika, Hedra and Luma, but not quite Runway ML yet — or Sora, when it's released. So it is a very, very good tool, and if you didn't want to use Midjourney, you could definitely use this.

But once again, go back over to our page if you're lost with any of that. That's probably — yeah, definitely — the longest overview video I'm going to give you on any single tool. We've got a few directly on Midjourney when we get into that, but these next ones — Dream Studio, Firefly, and then Runway for image, and Meta and Grok especially — are going to be a lot less in depth, with a lot less to talk about, because they have a lot fewer features to go through. So that was an overview of Stable Diffusion. Let's go into Dream Studio next and check that out.
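One pattern that came up over and over in this section — regenerate and regenerate until the style lands — becomes trivial to automate once you're calling an API instead of clicking. A minimal sketch, assuming a Stability-style text-to-image payload (the field names are assumptions; the point is the seed loop):

```python
# Sketch: "regenerate and regenerate" as a loop. Each request reuses the same
# prompt but a different seed, so you get distinct candidates to pick from.
# The payload shape mimics a Stability-style text-to-image call (assumption).
def candidate_payloads(prompt: str, n: int, base_seed: int = 1) -> list:
    """Build n generation requests that differ only in their seed."""
    return [
        {
            "text_prompts": [{"text": prompt, "weight": 1.0}],
            "seed": base_seed + i,  # a new seed per attempt = a new variation
            "samples": 1,
        }
        for i in range(n)
    ]

batch = candidate_payloads("elderly man portrait, anime style", 4)
# Each of these would be POSTed in turn; keep every result and choose the best.
```

Fixing the seed also works the other way: re-sending the same seed with a tweaked prompt is how you iterate on one variation instead of getting a brand-new image each time.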
— Dream Studio Overview: A Beginner’s Guide —
The next tool I want to show you is Dream Studio.

Now, the eagle-eyed ones of you will see up here it says "By Stability AI" — and we've just been playing with Stability AI, haven't we? Yes. I want to show you this tool because there are a lot of AI tools out there that — think of it as a white label — are using another model underneath. In fact, I have an app towards the end of this section that uses eight or so different AI models, I think it is right now, to combine everything into one result. What Dream Studio does is use Stability AI, but with a very different layout. I want to show you this because there are some differences with regard to cost packages and the overall layout. There are somewhat fewer features, but set out in a different way; some people like this interface a lot better, and it got voted very high up among the AI models people like to use for images. So it's my duty to show you it — perhaps it'll be your AI model of choice.

So, to the website: once again, on our page, AI video dot school slash AI image generation, if I just scroll back up here we can see Dream Studio. Use the dropdown if you want to go to the site — it's linked right there — and I also have details all about Dream Studio as usual: the ideal prompts, some of which we'll test shortly, and the basic rundowns of what you can do step by step if you want to follow along. I'll probably use those steps as we go through understanding Dream Studio.

So let me just go to the Dream Studio page. It's a very simple interface that looks like this — divided up quite nicely for usability, I think: either Generate, or Edit for when I have an image after I've selected it. The first thing you do here, rather than having it in the prompt — so you don't have to say "in the style of cyberpunk" or whatever it is you want — is go through and choose your style: none, enhance image, anime, photography, digital art, comic book, analog film, claymation (quite good), cinematic — that's basically deep focus. Let's go with that one for now, so it's cinematic. For a prompt I'm going to just choose one right here: "close-up portrait of an owl, dramatic lighting". Great stuff. Okay, so I'm putting that in, and I'll just generate that, and then we'll continue on with the prompts. Let's just leave all the rest as standard. Let's Dream. Nice — and these are my results.
30
it’s actually quite quick Dream Studio so you can see by the aspect ratio we didn’t
31
change anything let me just put all these up here for you to have a look great here’s
32
the image size nice look at that depth of field there the feathers are pretty good not
33
too detailed here but the eyes look the reflection in the eye really good here’s some nice bits
34
especially with the feathers you definitely pass it as real wouldn’t you you’d say this
35
was beautiful photography wildlife photography really really great so you might have noticed
36
up here it’s got my credits and what’s moving down so let’s go into this and Dream Studio
37
works on a credit based system where you can see the number of credits going down to generate
38
those that’s taken it was on 25 so it’s taken just under five credits and I can buy a thousand
39
for ten dollars I can go down and do five dollars I think you can’t go let oh you can
40
you go all the way down to a dollar for a hundred credits so if you want to try this
41
out then you can without spending too much money okay really nice now if I go down here
42
I can go negative prompt and we can do some of this so would you want to avoid anything
43
negative prompt no yellow we saw there’s lots of yellow in this including the eyes so let’s
44
go with that you can also upload an image to create variations of we’ll come back to
45
that in a moment one one was how we had it here I can scroll across either way so I can
46
go tall you can see the logo here towards the tall version for say Instagram tick-tock
47
or across for video YouTube and such like how many image variations of this do you want
48
you can go all the way up to getting ten variations of course that use more credits or down to
49
one there’s also advanced features right here if I scroll down you can tell it different
50
in the width for example if I wanted to manually set this and then let’s go back up to well
51
actually let me generate that first and then we’ll come back up to upload an image let’s
52
see it basically doing it now with the new aspect ratio the new image count and I’ve
53
used the negative prompt dream perfect and you saw my credits go down there from one
54
point two so we’re looking at one point two credits per image generation let me open that
55
up here now I said no yellow instead we got less yellow than perhaps some of the other
56
ones here was yellow here still got yellow in the eyes right there okay but we can try
57
something else I want to get that negative prompt right so so let me go majestic mountain
58
landscape sunrise panoramic view let me go back to dream studio I’m going to use that
59
as my prompt I might as well change it cinematic digital art comic book might be nice let’s
60
try a comic book and then I’m also going to have a negative prompt so majestic mountain
61
landscape at sunrise vibrant colors I want no trees in shot okay which they would definitely
62
put in if I didn’t do it so let’s see if it’s able to respond to this properly nice brilliant
63
so this is done in comic book style there are definitely no trees in shot we got a bit
64
of grass here it looks like but no trees perfect let me just actually do that if I take that
65
out but it may remember based on what I’ve done previously but let’s dream that and see
66
if it puts any trees in here it might not again but if you didn’t want them definitely
67
wouldn’t be you’re going to see obviously how you’d use that oh no it definitely had
68
trees in there I knew it would and we said no trees so you can see the negative prompt
69
is working there so let me just download that for a second I’m going to use that image if
70
I upload an image right here I’ll download the one we had right there that’s the image
71
that we have image strength now you can choose so how much do I want to have the image strength
72
is it going to be entirely based on this image very very close to it or mildly I’m
73
going to do this let’s do that and let’s do but this mountain landscape bright and sunny
74
as opposed to right now we have a sunrise negative prompts let’s do no trees but I am
75
using an image with trees so there’s definitely conflict in prompt here I love to see and
76
test this out let’s see what it says and it says basically that my image prompt is
77
too close to call definitely too close I’m telling it to do the same one at sunrise but
78
I’m using this image with the strength up high and no trees so let’s turn the image
79
strength right down on this actually let’s do the opposite let’s go right up but this
80
time I want to take out the negative prompt and I want to go snowy mountains snow once
81
again I’m conflicting here because I’m saying very high and this but it should look just
82
like this and perhaps it will have snow this time let’s see how dream studio does and just
83
to do this on the exact same thing if I do the image strength very low at five percent
84
let’s dream that so here are my two results this one was at high which were basically
85
pretty much just getting the same image it looks like it’s very bit different in the cloud but it’s
86
very very similar so if you want variations on the same image basically then you could do it
87
that way and then right here when I put it down to five percent for the image strength I’ve got
88
pretty much very similar layout not identical with the mountains but I’ve definitely got snow
89
in here so it works so there are different things I want to show you there are variations you can
90
generate variations just like we had here 1.2 per variation actually I’ll show you here we are
91
another variation snowy mountain in exactly the same style looks great perfect I can also do
92
things like I can edit the image here which we can also do here and I can set as initial image
93
which is if I click that it just populates it right here as opposed to uploading it like we
94
downloaded earlier but you may have an image from Google or something so you may need to upload like
95
I showed you let’s go into the edit right here so this is what pops up when you have the edit page
96
I don’t know why that’s showing me that so small but you can also get rid of this this is all the
97
prompts and everything that I’ve done let’s just move that out the way so I’ve got this let’s zoom
98
that in just to play with some of this I want to show you okay so now I can select and I can move
99
onto an area I’m gonna work with let’s select the razor all right let’s remove those clouds remove
100
remove I don’t want any of these let’s see how you do with this let’s dream that up and what
101
you’ll find happens with so here is the problem of dream studio and I have to show you everything
102
give my honest opinion I’m not sponsored by any image tool or anything is that every time you do
103
this and before I’ve been on the beta version and now on the real version here when you select that
104
it just kind of duplicates on top of of the original image it’s so frustrating and not
105
intuitive for the edit feature if I go back here I can see that it just kind of took this section
106
that squared right there where the selection was let me just remove that and try again and
107
it’s very frustrating if you go in a place like reddit people are always talking about it like
108
can anyone able to use the edit function in dream studio people say they’ve tried and stuff remove
109
make sure you’re not in the beta version but even Google doesn’t help it’s like not quite complete
110
dream studio it will be and it will be a tool that’s coming up and lots of people love it so
111
perhaps you all have less issues with this but it’s definitely a little bit glitchy let me go
112
back here so I’m sure and not really explained any about the select here’s a select here’s a
113
select right here let’s get rid of it let’s get rid of it and let’s go into the edit let’s go
114
erase again all I want to do quite simply is erase a cloud so much less user-friendly than stable
115
diffusion directly I’m not sure why people like this tool but they do so I’ll show you and now
116
I can’t dream once I’ve selected once I’ve selected that which is very frustrating so if I come back
117
and I select this okay my whole image is selected all right I’m selected it’s all here let’s dream
118
and I think it’s just gonna once again take it and place it over the top yeah once again done that so
119
I mean I like to use dream studio before perhaps to generate some images and some different versions
120
it’s very user-friendly beginners especially with this I have used it before for craft clay when I’ve
121
needed it or pixel art because it seems to have those filters in place but it’s still missing
122
something inside the edit feature here you can export it fine and everything else but as far as
123
the eraser goes I’m sure there’s a way and someone can let me know in the comments but it’s just not
124
intuitive I would definitely use one of the other platforms have to show you this because it was
125
asked for and it keeps getting us for dream studio but I would use one of the other platforms right
126
now but this will keep updating and I will update you when it improves if it improves but yeah okay
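DreamStudio is the front end for Stability AI's models, so the image-strength and negative-prompt controls we just played with correspond roughly to request parameters. Here is a minimal Python sketch of how those settings might be assembled; the field names follow Stability's v1 image-to-image REST endpoint as I understand it, so treat them as an assumption and check the current docs before relying on them.

```python
# Sketch: how the image-strength and negative-prompt controls map onto an
# image-to-image request. Field names are an assumption based on Stability
# AI's v1 REST API, not something confirmed in this course.

def build_img2img_fields(prompt, negative=None, image_strength=0.35):
    """Build the form fields for an image-to-image call.

    image_strength: 0.0 means the init image is barely used, 1.0 means it is
    reproduced almost exactly. High strength plus a conflicting prompt (like
    "no trees" with a tree-filled init image) tends to be ignored, which is
    exactly what we saw in the demo.
    """
    if not 0.0 <= image_strength <= 1.0:
        raise ValueError("image_strength must be between 0 and 1")
    fields = {
        "init_image_mode": "IMAGE_STRENGTH",
        "image_strength": image_strength,
        "text_prompts[0][text]": prompt,
        "text_prompts[0][weight]": 1.0,
    }
    if negative:
        # A negative prompt is just a prompt with a negative weight.
        fields["text_prompts[1][text]"] = negative
        fields["text_prompts[1][weight]"] = -1.0
    return fields

# The two experiments from the video: strength way up (near-duplicate
# variations) and strength at five percent (same rough layout, new content).
variation_run = build_img2img_fields("snowy mountains, snow", image_strength=0.9)
loose_run = build_img2img_fields("snowy mountains, snow", image_strength=0.05)
```

The point of the sketch is the trade-off itself: one knob decides how much of the init image survives, and the prompt only wins when that knob is low.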
So next, we're gonna go on to Adobe Firefly.
— Adobe Firefly Overview: A Truly Amazing Tool! —
Obviously, I had to show you an Adobe product in here. If you're working in video or design or anything, you'll know Adobe, most famously Adobe Photoshop. I use Adobe Premiere Pro to edit; you'll see that a lot in this course also.
Right now I'm not signed in, as you can see right here, so there is an element of using this with and without an Adobe account. I'm going to stay not signed in, though I do have access to the full Adobe suite. This is Adobe Firefly. This is their image generation. And soon, if I scroll down right here, video: I can join the waitlist. I will update this and note when Firefly for video is available; that will go in the video section. At the time of recording it's not available, but I will add it later.
To get there, if I go back over to our page, that's AI video school, slash AI image generation. If I scroll down underneath Adobe Firefly here, you can access the Firefly official page. You'll come to something that looks like this; obviously, it might have changed. You can either just start generating right now, or, as I sometimes like to, go through and see what the community has been doing. Much like on Midjourney, which we've looked at before, you can get inspiration. A lot of AI models are doing this, which is really good, because you can see the prompts people are using and what's possible. Really nice.
That's a really nice image right here, isn't it? Really good. You can click and view it, it'll also show you the prompt, and you can start editing the image if you wanted to.
Okay, so I'm going to scroll down. There's text-to-image and Generative Fill; I'm going to show you those inside here. I also use Photoshop a lot, and for that, Generative Fill is very handy inside the Adobe suite. Then there are other things here, create a vector, generate a template, but we want text-to-image. So let's go and generate something right here.
Let me come back over, and I'm going to just steal one of the example prompts that I've got right here. Let's copy this one: "portrait of a wise old man, renaissance attire, highly detailed, dramatic shadows, oil paints". Let's take a look at that and paste it in here. And this is what happens; this is what comes out, along with the editing page. If I go back over to our site here, you can see I've got the main points right here, which pretty much reference what's going on down here. So if you want a bullet-point list explaining what they all are, I've got it for you, but I'll go through them in this video.
Now, by the way, I think Adobe Firefly is probably the fastest at generation of the image tools that we're using. Super quick, really, really quick. So if you're in a hurry, this is the tool for you.
Now if I click on any of these, let's have a little look. Really nice. We've got a guy who looks to be of Asian or Polynesian descent or something. Across these images we've got a guy here, and here, and here; it's quite nice that we've got a bit of diversity. A Black male, one who looks Asian or Southeast Asian, and then a white guy and a white guy. Because quite often when you generate these, I'm not sure why, I guess maybe because I said "renaissance" and it's drawing from what's available in what the models have learned, you quite often just get white males or females. It's quite nice that Adobe often gives you the full range of options without prompting for it.
So let's play with some of these, and I'll show you over here, I guess. If I go to the top, let's go down these one at a time. You've got Adobe Firefly Image 3 and 2; 3 is obviously the most recent at the time of recording, and I always stick with that now. Fast mode, on or off: it's so quick, but you're going to get a slightly lower resolution doing it quickly than doing it slowly. But we can always up-res and upscale it right here; that's no problem.
Next we have our aspect ratio. It was on 1:1, that kind of square Facebook format; I'm going to change that to widescreen for this tutorial. And then content type: do I want a photo, do I want art? It can be on auto, which means it's making the decision for you, guessing based on your prompt and what you want. So I've got photo, but I could have art here, because you'll see that I asked for an oil painting style, which conflicts with this prompt right here: it said photo, but I want it as an oil painting. So it kind of gave me somewhere in between. If we look here, it's definitely not ultra, ultra realistic, like a renaissance realism painting, I guess.
But if I put it on art here and regenerate, we might get more of an art style. Okay, I mean, it's still a little bit video-gamey in some of these. Maybe not this one; that's quite nice, and that's different altogether, a little bit of expression on his face. We're getting an ultra-realism oil painting, which I guess is down to "renaissance" and the renaissance art movement coming through here; we also said "highly detailed". If I go "a portrait of a wise old man in renaissance attire" and then "oil painting style", it should maybe take away some of that. Okay, yeah, now we're getting more of a painting style. I'd have to regenerate and keep playing with these.
You see, if I bring my cursor up to here, it has prompt suggestions, because I've got prompt suggestions turned on right here. You can turn that on and off, and it can say, hey, do you want a portrait of a wise old man, renaissance style, isolated on a pink studio background, playing chess, pensive outdoors, or thoughtfully sitting by the window? Okay, let's actually just go with that for a second. We're still on art here, as opposed to photo. Great. I'm going to generate that, and then do one more thing to try and get more of a painting style. Oh, these are nice, though. Yeah, really nice.
But if I go down, and we're coming to this shortly, if I go down to the style right here, I can browse the gallery. Let me keep going: acrylic and oil. Let's just choose this one for a second; I'll come back to this tool in a moment, just giving it a style reference. And this is where Adobe Firefly really sets itself apart from perhaps other models. I'll come back to this and the reference images shortly. So that's the content style. Let's put this on photo; I'm going to do a different prompt here.
Now, these next two sections, composition and style reference: these are where you can tell this is design software. The Adobe suite, with Photoshop obviously the most famous product like I said, is really meant for designers. So thinking about composition, I can upload a reference image if I want to. Let me just do that: click here, upload image. Let me zoom in. There's an image of a guy right here looking straight at camera, right in the middle of the shot. Now, this is a reference for the composition, so what comes out should match this layout, with the guy slightly over to the right, facing that way. I've given it the composition reference, and I can choose the strength. I want the strength way up high. Let's go with that, still with "a portrait of a wise old man, renaissance attire, thoughtfully sitting by the window". This will be interesting, because the prompt suggests a different layout to the image I've given it, but let's test it out.
All right, nice. You can see that because the strength is up so high, it's taken this as a composition reference exactly. However, although this is only a composition reference, not a character reference, because it's so strong it has actually ignored "wise old man" and done a young guy. Let's generate that again: take away photo, put it on art, and see if we start getting more of an old man. It's so fast, I can't believe it. Yeah, now he's getting older, got a grey beard, though still quite young. Let's move the strength down to very low, so it still has the reference at least, and compare. All right, nice. We're getting something somewhat similar, and then I could change this to make him look at camera, or change it in the prompt if I wanted to. So that's composition.
Now, even if you don't have a reference image, this is still very handy, because compared to what you'll see when we do Meta and Grok, and some of the other tools like DALL·E and Gemini, you have far more control over composition right here, as opposed to having to type it inside your prompt. So if you don't have an image, I can cancel that, you can go browse the gallery and start asking: what kind of framing do I want? What do I want it to look like? I want to make this image of the cat; let's do that, but still say "a wise old man". Let's remove that last one right there. So we've got the art style, we've got this composition, and we've got a style reference, which was that painting style we chose. Let's generate and see how much it takes from this composition, where the cat is. Okay, perfect. The cat was over to the left-hand side, slightly looking at that 45-degree angle, and now our man in renaissance attire is over there, on that side.
This is really nice, because so many times you're going, can he just look up, or down? Especially, you'll see even in Midjourney it's difficult to get a high-angle or low-angle shot; sometimes they either go really high or really low, like they're on the floor. Here, I can just find an image from Google or somewhere that matches roughly what I want, or you could even break down an existing scene from a movie and screenshot it. Similarly, you can upload that as your reference. A really nice aspect of the tool; I really like it. So we can continue going down. That was composition; let's do style.
Okay, so visual intensity, and you can hover over here: this is the intensity of your photo's overall characteristics. I'll show you what it means by that. If I go all the way to the top, I'll have it on photo, and let's just do "a wise old man" for the sake of this. Oh, I've still got a style reference on here; let me just take that off. Let me move the intensity all the way up, with just "photo". Nice. So that was intensity all the way up: look at that, with this nice moody background, dark, nice. Now let me move the visual intensity all the way down, and you can see, comparing this set to that set, we've got a much less intense look. That one could even read as quite dark or sinister, couldn't it? And this one is much more of just a studio shot. That's moving the intensity. Let me put it back in the middle. Once again, each time you do this, you've got a reference that you can set.
Now on to effects. I want to show you some of these; I can have a look at all of them if I want to. I can scroll down, for example: here's Art Deco, Cubism, all the different art styles. It's so good having a design platform like Adobe come into this space, because if you're unaware of what these styles are called... we've had our page here on the style guide, but here you don't even need a style guide: it's showing you what each one looks like. If I chose Art Deco, I don't need to know what that is. Or Minimalism, for example. And if I keep scrolling, there are loads of themes: 3D, anime, cartoon, even techniques, acrylic paint, bold lines, antique photos; it goes on and on. Different materials you can use, too. That's really nice. So let's do some of these, actually, shall we?
Let's choose... I quite like, let's do graffiti. And it's a photo right now; I want to have it as art, an old wise man as graffiti art. Let's generate and see what it does. Great. So here's a man, and he's covered in paint here, and he's got some graffiti in the background. Perfect. You can start seeing how we're layering these up, depending on what you want, and you can really narrow things down. So there's less concentration on your actual prompt: you just say who the character is and what's in shot, and then let the content styles, the effects and everything else take over.
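As a rough illustration of that layering idea, here is a small, purely hypothetical Python helper. This is not Adobe's actual API; the function and field names are made up just to show how a minimal subject prompt plus separate settings might combine into one generation request.

```python
# Illustrative only: the Firefly UI effectively keeps your prompt down to
# subject and scene, and layers the rest on as settings. All names here are
# hypothetical, not Adobe's real API.

def compose_generation(subject, content_type="photo", effects=(),
                       color_tone=None, lighting=None, camera_angle=None):
    """Collapse UI-style controls into one request description."""
    settings = {"prompt": subject, "content_type": content_type}
    if effects:
        settings["effects"] = list(effects)  # e.g. ["graffiti", "bokeh effect"]
    for key, value in (("color_tone", color_tone),
                       ("lighting", lighting),
                       ("camera_angle", camera_angle)):
        if value:
            settings[key] = value
    return settings

# The graffiti example: the prompt stays minimal; the style lives in settings.
req = compose_generation("an old wise man", content_type="art",
                         effects=["graffiti"])
```

The design point is the separation: the prompt names the character and the shot, while reusable settings carry the look, which is exactly why you can swap styles without rewriting the prompt.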
And lastly, let me go back and add what I'll just call a bokeh effect, that light-focus kind of thing. Let's go to colour and tone, and say it's a golden tone that I want. Lighting: let's call it low lighting... no, long-time exposure. And then camera angle, again, so good, seeing as we're going to make video. If you already have a package for the Adobe suite, please look at using this; if not, you may choose this as your tool. This is probably actually better for me than Stable Diffusion. I like Midjourney, then Adobe Firefly, and then Stable Diffusion, probably in that order for image generation.
Let's do a shot from below. So now we have a portrait of a wise old man, bokeh effect, golden, long-time exposure, shot from below. Let's generate. There we have it; that's exactly it. It's basically nailed the first two, this one and this one not quite so much, but it doesn't miss a beat. I mean, I don't know why it generated these lights over here, but I've got that bokeh effect, which is the blur and points of light against the dark, then we've got this golden light that I asked for, and the angle is from below.
So when you're trying to generate shots for a story, for example, if you were doing an intense scene and you want it shot from below, like we saw Tarantino doing earlier in the course, or shot from above, where you want to make someone feel small, insecure, or perhaps just insignificant in the shot, as far as the emotion you're telling, this is a really nice tool to be able to do that, because we're going to turn any of these images into videos.
Let me just go back to here. If I click on this one, for example, I could then upscale it to make it better, or I can go to edit and start doing some other things right here. I'm going to ignore "add text", which is obviously adding text, and "shapes and graphics", which is a design feature. I can use a style reference for something else in other projects, or use a composition reference, and it will just fill in on the side here where we were before. So for example, if I want this as a composition reference right here, I come out of it, and you see it's got my composition reference at the bottom there.
But I want to show you some editing tools here. It's got Generative Fill and "generate similar", which is obviously just to regenerate more of them. Let's go to Generative Fill. This is pretty much where I want to either insert something, remove something, or expand, or pan. So good to have these. Let's go insert. If I just brush this onto his shoulder here, let's put something right there. Okay, now we're on add, not subtract; we're not selecting the background or anything like that. I'm just going to type "a small bird on his shoulder", and let's generate. Really nice: okay, we've got a bird on his shoulder here, with a few different versions of it. There's a nice chubby bird there, this one's got a bit of character to him, and that one's just sat there. I really like that. Let's keep that bird.
Now for remove: I'm going to just take away some of this beard; perhaps I think the beard is too long. Okay, I'm going to subtract this and then click remove. And now I've got these different versions, from the original big beard to a small, neater beard like this. Okay, smaller beard. I still like a big beard for him; I'm going to keep this one right there.
And now, to make this really easy for you, if you wanted him in a different place, you could just go select background, and it takes it away. A really nice cutout: if you had to do this manually, like we used to in Photoshop, it would take so long to cut around these fine hairs and around here, and it's done it for you instantly. I can just click remove, and then it gives me some different background images. If I wanted to keep one of these I could, but I'm going to cancel that and keep it as it was.
Let's go expand, if I want to. I can drag this out and make it like that. So if I just wanted this bit, I could zoom in, or perhaps I want more of him; let's do this. Okay, and let's generate. You'll notice I'm not giving it any prompt right here. I could have set a prompt and said "arms folded" or something, and you would see his arms come out. Quite often I like our models to be the artist and just give me some examples. So let's see what we got. Yeah, really good. Okay, let's keep that right there. And then I can free transform this, keep it as a 1:1, or do it 16:9; I could do that and then just generate more. It's probably going to give me just a black background on either side of this, but I'm showing you that we've got so much control, so much control over the image. It's a really beautiful tool to use.
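Under the hood, insert, remove and expand are all variations on mask-based inpainting: you mark a region, optionally give a prompt, and the model refills that region. As a purely illustrative sketch, not Adobe's actual API (every name here is hypothetical), the moving parts look something like this:

```python
# Hypothetical sketch of mask-based generative fill. None of these names are
# Adobe's real API; this only shows the shape of the operation.

def build_fill_request(image_id, mask, mode, prompt=""):
    """Describe one generative-fill operation.

    mode: 'insert' paints new content into the masked region using the prompt
    (like "a small bird on his shoulder"); 'remove' refills the mask from its
    surroundings; 'expand' outpaints beyond the original canvas. An empty
    prompt, as in the expand demo, leaves the new content up to the model.
    """
    if mode not in ("insert", "remove", "expand"):
        raise ValueError(f"unknown mode: {mode}")
    return {"image": image_id, "mask": mask, "mode": mode, "prompt": prompt}

# The bird example: a brushed region on the shoulder plus a short prompt.
bird = build_fill_request("portrait-01", mask="shoulder-region",
                          mode="insert",
                          prompt="a small bird on his shoulder")
```

Seen this way, the insert, remove and expand buttons are one operation with different masks and prompts, which is why they feel so consistent across the Adobe suite.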
And that was everything I wanted to show you in Adobe Firefly, a little overview right there; you could go way more in depth. Generative Fill is very, very handy, and probably at its best if you use something like Photoshop, which I use quite a bit. Let me bring that up. Here's what it says I look like; very flattering, love it. And in exactly the same way, if I just zoom out, inside Generative Fill I could select an area like there, type "coffee cup", and generate. It's generated me three versions of a coffee cup. That's quite a nice one, isn't it? And I can keep them, or obviously get rid of them, depending on what I want to do. It's the same Generative Fill across the Adobe suite. Really nice software, so much control.
If you have a look at the pricing packages, they differ a lot from country to country, so I can't show you that, and it depends on whether you want the full Adobe suite. If you're looking for an editor and you want to use Premiere Pro like I do, then it comes with this also, and soon it's going to have video. Adobe might be the all-in-one package for doing AI: image generation, video editing, and then design or whatever else you use Photoshop for. Or, if you just want to use this tool and then use others, it's completely up to you. But I had to show you this; it was probably one of the more exciting ones here in image generation, apart from perhaps Midjourney.
Next, I want to show you Runway image generation. Runway is the tool we're going to use for video, mostly, and it's the market leader in video, but you also have the ability to create images.
— Meta AI Overview: Image Creation —
Moving on to Meta AI now, a tool I wanted to show you. Just forewarning you, it's not my favourite tool and it probably won't be yours, depending on your requirements for generating AI images. It's far more geared towards a general user: it sits inside Facebook, generating images for Facebook, probably to help boost interaction on Facebook, Instagram and the like, as opposed to a video creator like us needing to generate an image and keep consistency across images. But it is a tool, it's free, and I have to show you: it's going to be around for a while and it's going to keep getting added to. I will keep updating this when any significant changes and updates happen. Meta is one of the companies I want to mention, the same as X, which has come up; I just want to make you aware of these big companies.
So when you come onto the page, once again at AI video school, slash AI image generation, you can see what I've written here about Meta; it will not take long for me to show you this platform. And once again, you can click to go to the official page right there. You'll come to something that looks like this, and you'll be logged in with your Facebook profile. On this page you can see your history, anything you've been doing before; here are some things that I've generated previously inside Meta. On the main page you'll also find lots of different things, like text-based chat, as with ChatGPT: "help choose me a pet". I haven't played with that; it's not something that interests me too much. And then "imagine an image": we can imagine an image and come up here. If you don't want to go straight into writing your own prompt, there's a kind of inspiration page right here. If I click on one of these, let me select these two aliens reading in a library, you can see the prompt that they used: imagine aliens reading books in a library that has views out to a beautiful forest and is quiet. And this is what it generated.
So let's just generate one of the images here. There isn't much after that to show you, if I'm honest, but you can see the image quality for Meta, which is actually pretty good. So if I paste, let's add that in there: "a magical forest with glowing trees and mist, ethereal lighting, fantasy art style, quality high". You can see the images we're getting per generation, just like we see inside Midjourney, and these are good: definitely on the illustrative side, or this one's a bit more realism fantasy art. Really nice. The three dots up here are there if I want to download, share, or report if something comes up that you think shouldn't, and the same up here.
One quite nice thing: either inside here, or when you go back to your main page, you can just click "animate" and watch what happens. Okay, we get a bit of movement, about four seconds or so: the camera just pans and goes down. Unlike the tools where we have full control, it doesn't give me any animation prompts I can use to direct it, but it can just animate, if that's what you need. If you just need any movement whatsoever between shots, maybe you need a collage of these shots with a small amount of movement to join them together, that would suffice.
Now, you'll notice that this is in a 1:1 format, and that can be tricky to get out of inside here. So let me do exactly the same prompt again, and let's add "aspect ratio 16:9 landscape", so it definitely knows what we're asking for. The exact same prompt again, but with the new aspect ratio, and we'll see the results.
And still they are in 1:1. Now, any of you that use social media know that Facebook heavily favours the 1:1 square format, whether it's video or image, even across the Instagram feed; 1:1 is just the Facebook format. So it makes it quite difficult: if you prompted enough, you probably could get out of it, but I've never managed it myself, and I don't really use this platform that much.
It is something I want to show you, though; the images are really nice. Let me actually try, while we're here: "ultra-realistic, photo-realistic image of a man's face, old, close up". I want to see how it does with realism. We've seen fantasy art, and it's obviously drawing from great sources for that; that works. But how does it do with photo-realism? Pretty nice, actually; the quality is good. Here's a close-up: the face is okay there. Yeah, we're getting a bit illustrative there; there's one with way more realism, but an ultra-stylized look overall, almost plasticky rather than skin. But it does generate a really nice image. This one is perhaps a little bit too much, but you can see you're able to do all kinds of styles right there.
Unlike some of the other platforms we've seen, for example Stable Diffusion or Midjourney, where you can pick styles from a style list, Meta is a far more conversational-style AI: I'd have to tell it, for example, I want this again, but in cyberpunk. Now, I think I have some conflicting prompts there, but let's just see how Meta copes with that. Okay, yeah, got it. Definitely on the cyberpunk skew right here, apart from that one, which is just generic. I'm not sure about that scar or hair in the middle of the face there, and then, yeah, we've got some lines here. So it did understand: very responsive, very good.
And that's pretty much it, I'm afraid, with Meta. You can animate these, of course, and see all of them, but that's pretty much as far as you get, and everything else is going to be prompt-based. There is no inpainting yet, no editing or removal tool or anything like that. It is for the more casual user. Now, if you just need casual video, I'm not sure you'd be doing this course; but if you did, then it's an option. And once again, this will update: Meta will definitely update this. It's one of the biggest platforms, obviously; I've seen it trading and doing really, really well, I follow all the big companies and see how they're doing, and I think it's just going to increase and get better and better. So I definitely want to keep it in this course, and I will update it as big changes happen. Stay tuned for that.
So next, we're going to go on to X. We've gone from Mark Zuckerberg to Elon Musk, and we're going to look at Grok.
— Grok Overview: AI Art Simplified —
So, exciting: I want to show you Grok. Not a lot of people show this, and I think it's going to grow over time and eventually be quite a big platform. It's still in its early days, and much like Meta, which we were just playing with, it's far more geared towards the casual user, used inside of X, formerly Twitter. So if you want to use this, you can. It does have some supposed benefits for some things you may want to generate; I'll explain shortly. If you want to come over to our page, AI Video School slash AI Image Generation, just as before, I can scroll down here and drop this down, and you can read all about it, plus the link to X.com — which is quite simple: X.com.

You would need a Premium account for this, though. Now, I can't tell you how much that costs, because one, prices change, and two, you could be watching from anywhere in the world, but it runs from a few dollars a month up to a more expensive monthly or yearly price, depending on the different things you need — and currently they all have access to Grok. And Grok right here: you can even access it inside Premium, which is under your settings, or right here you'll get a Grok icon up top where it'll just open up like this.

Now, you can obviously ask Grok anything, just like ChatGPT or Meta, and you could be using it just to answer queries and questions. You can attach an image and ask, "what is this?" But we're only going to try it for image generation. So it's kind of an all-in-one, as opposed to a specific model built to generate AI images, and you're going to be limited — just like Meta at present — in getting consistency between shots and things. But let's just go and see what the quality is like. It may be that you want to use this because you're a casual user and don't need too much control, depending on your projects, or you just like the styles it comes with.

So I'm going to paste this in here, and then we can do things like talk about styles and aspect ratio and see how this comes out inside Grok. Very quick, by the way — Grok is extremely quick. It's more like a 4:3 format; I didn't specify one. And I can zoom in here, and you can see that the quality is nice. There's some good texture on there, and it understood what was asked. Good lighting — I've got the reflection of this flame coming through here; it even understood to do that whilst generating the texture on this rope and on this chainmail. Really nice. I really like the images that come through with Grok. Now, if I say "aspect ratio 16:9" and paste that in, let's see what happens.
And I can already see the format coming through. Unlike other tools we've used, where we select what we want — for example, in Stable Diffusion we've selected it — here it's all conversational, all in what you prompt. Now, let me open this. Oh, it actually hasn't done 16:9 — that's definitely not a 16:9 image. Let's do it again, and I want to see if I put in "landscape" right here. And then I'm going to change the word "portrait" — maybe that's confusing it — and just say: image of a knight in armor, dramatic lighting, ultra-realistic detail, Renaissance style. This is really nice. I'm not sure about the proportions — that looks like a very heavy midriff in comparison to the size of his head — so I might want to regenerate that.

If I click on these three dots, I can save the image, copy the image, or of course post it directly to X. You can also do things like monetize, if you have followers, and start monetizing images you've created inside X if that's what you want. Let's go back and test something else. If I come back to the page, I've written this down.
You can obviously give it the theme you want, attributes, details, style, mood, and technical settings like aspect ratio — although we've just seen it's struggling with that heavily. But also, Elon Musk himself and X really pride themselves on freedom of expression, freedom of speech, and because of that the mandate may enable you to create images without a filter, unlike some other platforms. For example, even for safe images — say you needed a scene where people were in swimsuits, bikinis and trunks — if you try to prompt this with something like Midjourney, even though it's a safe image, you may sometimes get flagged with "no, I can't do that", because it's being very cautious. Now, X is a little less strict, of course. Note, we say: nothing illegal, derogatory, dangerous or adult. Don't do any of that — and it shouldn't work anyway — but don't do it. Still, you should be able to do more here.

Even if I go into Midjourney and try "Donald Trump" — let's do this, let me show you. Not that I suggest it; obviously, we've spoken about ethics millions of times, and this is personal use only. You can't be uploading video and imagery like that without some kind of consent. But if I prompt "Donald Trump playing chess" — oh, Midjourney has actually done this one. Sometimes you will get flagged, though. Perhaps if I try "Donald Trump smoking a cigar". Let's try that. Yeah, okay, this one says no. So there are some limitations: he can play chess, but can't smoke a cigar, perhaps because of the idea that I could be portraying him in some negative light or with that imagery.

So let's go to Grok, though, and try "Donald Trump smoking a cigar". Okay, and now I've got an image here of Donald Trump smoking a cigar. So obviously, some platforms are far more wary and will hold back on some generations, even though there's nothing illegal here with Donald Trump smoking a cigar. You know, you could be manipulating it for something else, or to portray him doing something he wouldn't do, something some people would find negative. So it does have a bit more freedom of expression right here. I can copy that right here if I want to share it, and I can also regenerate just with this. Let's just see what happens when I regenerate it. Now we've got quite a funny, almost illustrative caricature here. But they are really nice images. If I zoom in on the texture — look at the hair and the skin — very good. And it's managed to part the lips: sometimes when you generate these, it's a closed mouth with just a cigar placed over it, but that's actually done really, really well. I like the light on the tie. Really nice.

The other things I can do are like and dislike it, so it can help with the learning. But there isn't anything like inpainting, editing or removal tools; it's far more casual in its approach. Now, I was just on Grok 2 Mini. Let's go to Grok 2 Beta — it says it's the most intelligent model, with another one coming out. Let me give it something a little bit harder. Okay: generate an image of Donald Trump smoking a cigar with his hands in the air, wearing gloves. Lots of details it has to hit. Oh, and I've also typed "Generate Den".
I don't know where that came from. Okay, he's definitely wearing gloves and smoking a cigar. That looks a lot less real, though. Like I mentioned just now, there's a cigar that it just puts against the lips. Let's just regenerate and see if we can do a better job there. Okay, now this is getting funny. So Donald Trump is wearing leather gloves right here, smoking a cigar. Yeah, it's getting better and better. Now, let me take this again: Donald Trump smoking a cigar, white gloves, black background, dark, moody, intense. Let's give it some descriptive words right here. A bit of a typo there — misspelled, but that's okay, it should cope. Yeah. And now look how good that is. Really nice. Let me just regenerate — I could play with AI tools forever, regenerating and comparing them side by side. Now it looks like he's wearing woolen gloves, but I like what's happening here. So, like I mentioned, don't be using this to create anything derogatory.
And if you put that up, there may be some trouble, obviously, but it does do really nice quality images. And I've got some more right here. Since typing "16:9" hasn't worked before, I'm going to put in "--ar 16:9" — that's aspect ratio, the syntax we would use with Midjourney — and see if that makes any difference. But I really don't think it will; we're still going to get this more 4:3-style format. Yeah. And it is nice. Okay. Yeah, perfect.

So that's pretty much all I can show you with Grok. They will be updating this, and if we know Elon Musk, he wants X to be the platform you go to for news — it's the number one trending news app in a lot of countries — and for generating with AI and asking questions; he wants it to be an all-in-one platform. He's a smart guy with lots of backing and money, so we can probably bet this is going to improve drastically over time, and I'm going to keep updating this course as Grok gets better and better. If you're already an X user, a few bucks a month for Premium wouldn't be an issue; but if you're not an X user, you might want to go with one of the other platforms.
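To make the syntax difference concrete: Midjourney takes aspect ratio as a trailing `--ar` parameter, while conversational tools like Grok only respond to plain-language descriptions. Here's a minimal, purely illustrative sketch — `build_prompt` is a hypothetical helper, not any tool's real API:

```python
def build_prompt(description, aspect_ratio=None):
    """Append Midjourney's --ar parameter to a prompt description.

    Conversational tools like Grok treat this flag as ordinary text,
    so for those you would describe the format in words instead
    (e.g. "a widescreen 16:9 image of ...").
    """
    prompt = description.strip()
    if aspect_ratio:
        prompt += f" --ar {aspect_ratio}"
    return prompt


print(build_prompt("a knight in armor, dramatic lighting", "16:9"))
# a knight in armor, dramatic lighting --ar 16:9
```

The point is simply that the ratio is structured data for Midjourney, but just more prose for Grok — which is why "--ar 16:9" gets ignored there.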
So now we've covered a lot of these tools — some skewed more specifically towards image generation for the more advanced user, like what we're doing, and some much more for the casual user. Let's go on: I'm going to go back into Midjourney and show you some more advanced features there, right before we get to the end of this section, where I'll be creating the images for my own course project that we've been working on. We've already done some of those images in the last section; I need to fill in more and make sure I've got a complete story.
— AI Image Apps: Wizard —
So you're probably going to want to know about apps on your phone. I'm going to show you one. There are lots and lots of different options, but one that I think is a little more complete than others is Wizard AI. It's available for Apple and Android. If I just open this up: $4.99 a week, not the cheapest. I've cancelled mine — I wanted to trial it, and I used it for a while when I was flying and traveling a lot, to create images on the go. Great for that, but fairly pricey compared to the desktop options. There are multiple plans you can get here; a yearly one, obviously slightly cheaper.

Now, the great thing about Wizard AI — let me show you here. If I go to select model, look what it has: DALL-E, Sora AI video apparently, Midjourney, Stable Diffusion, Bing, Leonardo AI, DeviantArt and Adobe AI. So apparently it has lots of different models to use, all in one. It will probably grow, but at the time of recording there are eight different platforms to use here for your image generation, which is great. Let's choose Midjourney on here and type something in. Let's do "an image of a man and his dog". Very simple — not a great prompt.

Now I'll go through these. Let me show you Advanced. I can move the creativity, the strength and the CFG scale up and down — that's basically how closely it adheres to my prompt. Before that we've got aspect ratio — I want this in 16:9 — and HD, Full HD or 4K. I really like that it has those options. Let's apply those settings and select my model. And then you've also got this, which is really cool: rather than having to put it inside my prompt, I can instantly tell it the style — realistic, Lego (that's quite fun to do), pencil, digital art, and more styles below. I'm going to go with realistic for this one; we'll play with a few. Let's click generate.

Okay, completed. Let me have a little look at some of these. A man and his dog — pretty good. The annoying thing is I can't click to see these in full, which is quite annoying; the only way to do that is to download them and then open them up. So I'll download one. I'd guess this is probably the most cinematic.
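As a quick aside on those aspect-ratio and resolution presets: for any target width, the height follows directly from the ratio. A minimal sketch — the preset names (HD, Full HD, 4K) come from the app's options above, while `frame_height` itself is just illustrative arithmetic, not part of any tool:

```python
def frame_height(width, ratio_w, ratio_h):
    """Pixel height for a given width and aspect ratio (integer result)."""
    return width * ratio_h // ratio_w


# The 16:9 presets the app offers, plus a 4:3 frame like Grok's default:
print(frame_height(1280, 16, 9))  # 720  -> HD
print(frame_height(1920, 16, 9))  # 1080 -> Full HD
print(frame_height(3840, 16, 9))  # 2160 -> 4K UHD
print(frame_height(1024, 4, 3))   # 768  -> a 4:3 frame
```

Knowing the pixel dimensions behind the preset names helps when you later upscale or crop these images for video.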
If I zoom in, you can see it is realistic, but a little bit illustrative. The hair is really nice, good quality. The dog is also really nice — you'll find animals often come out really well, because you don't scrutinize them the way you do a human face. Actually, perhaps it's not that great: if I go to the edge of the hair, it gets a little bit lost, but it's a pretty nice, pretty good image.

And the first thing I want to do is compare that to another "image of a man and his dog". I'm just going to switch to the Stable Diffusion model for now. Okay, loading. Unlike when we've used Stable Diffusion itself, we get one image, not four. Let me just check this out — I'm going to download that one too and have a look inside my gallery. It definitely has a lot more. If I compare the two directly, a lot of detail gets washed out right here on these areas in the first one, but in this image I've still got skin texture — I can see the pores on his face. So I think Stable Diffusion actually did a better version of the image — although inside Midjourney itself, as we've seen, we can upscale.
You can't do that inside the app, so perhaps unfairly, we're looking at those Midjourney images without them being upscaled. Now, of course, I want to play with "a man and his dog" in Lego right here. Let's generate that, inside the Stable Diffusion model. Okay, nice: a man and his dog. I'm going to download that to check it out. A man and his dog in Lego. I'm not sure these tiny pieces entirely work for the dog, but for the man they definitely do. Something funny is going on with his eyes, though — I think it's trying to contour him slightly — but it definitely got the image you want. And you can go through and play with some other ones, like DeviantArt and Adobe AI.

It's great to have this on your phone if it's not too much out of your budget; it's nice to have it all in one place and be able to generate on the go. Sometimes I get ideas of things I just want to generate, even just to store in my files — I have them on my phone to come back to, they remind me, and I use another model for them eventually. Or I've got a spare 30 minutes in a taxi or something, and I generate and regenerate until I've got an image I'm happy with, download it, and then upload it into Midjourney or something as a style reference or image reference when I'm creating on my laptop.

So I needed to show you an app. There are lots out there; this is probably one of my favorites because it's all-in-one. None of them go very deep — there's no inpainting, editing and things like that to any great extent, like you get in the desktop versions, obviously, because of computing power. But that will come in the future, and I will update this. So that was an app. Let's move on.
— Face-Swapping Ethics: Stay Creative, Stay Legal —
In the upcoming lectures, I'm going to show you face swapping, where you can swap out one face for another. Obviously, before we get into that, I have to have this little talk with you, just a couple of minutes. You could obviously be manipulating this to put a famous person into an image that could be derogatory for them, or where you don't have permission — and there are legal and ethical consequences to that. By all means swap faces if you've found the perfect person's face that you have permission to use in an image, or perhaps it's your own face that you want to swap in; absolutely do that. You can see images like this that I've made on the site, where you take an image of yourself and place it onto another image using face swap. Absolutely fine: you own the rights to your own image.

I'm going to read this to you quickly. Legal considerations. Consent required: using someone's face without consent can lead to legal issues, especially in commercial settings — if you're doing this for a company or something. Right of publicity: unauthorized use of a celebrity's likeness may infringe on their commercial rights; if they're famous, their face is their property insofar as it makes them a living. Data protection laws: under GDPR — that's especially in Europe — facial data is protected as biometric data, requiring consent to use.

Now, of course, there are not just legal considerations but ethical guidelines too. Transparency: clearly label AI-generated content to avoid deception. If you're uploading AI content to YouTube or TikTok — and you've probably seen loads of it, even using celebrities — most social media platforms have a slider right there to say, yes, this was generated using AI. Please turn that on; you're not going to get any less interaction because you put that slider on. It definitely doesn't work that way. Some people are worried: "I've clicked this and now YouTube won't show it to anyone." That's not the case.
You just need to do it. Privacy respect: avoid unauthorized use of someone's likeness, especially in sensitive content. Don't be using a politician's or celebrity's face and showing something untoward, or something they didn't do at all. Content responsibility: ensure face swapping is used responsibly, avoiding harm or misinformation. We're getting into a very dangerous time where we've seen misinformation, especially with conflicts going on around the world, and how that can be used. Laws are coming in all the time for this, which could possibly be backdated, and you could be in trouble for it. Please don't be one of those people.

As a student of this school, we say: do not make unauthorized use of somebody's face, especially in a derogatory way. There's still freedom of speech if you're making something comical and lighthearted that couldn't possibly offend the person — but you don't know that for a fact. So our stance in this school is: don't do it. Best practice: obtain permission, especially for commercial use; prioritize privacy; and stay informed on evolving regulations. I'll try to update this as we move on, because in the AI world laws are going to come into place more and more, and they're going to be acted upon more and more, I think. Okay, so let me show you some face swapping.
— ReMaker: An Easy Tool for Face-Swapping —
Now, strangely enough, this little-known AI tool is actually my favorite for face swapping, and I think it'll be around for a while. It's called Remaker.ai, and you get a certain number of free credits per day — 25 or so, I think — which is probably enough for everyone, but you can also buy credits. From memory — actually, I can click pricing right here and check it out. This will obviously be different depending on where you are in the world, but it's 530 credits for $9.99, or $2.99 for 150 credits, et cetera. I've had maybe 500 credits for three or four months now; I'm just not going to get through them. It's loads and loads of credits for not much money, and they don't expire, which is great.
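To put that pricing in perspective, here's a quick back-of-the-envelope calculation. It assumes the 530-credit pack at $9.99 quoted above and one credit per swap; actual prices vary by region and change over time, and `cost_per_swap` is just an illustrative helper:

```python
def cost_per_swap(pack_price_usd, pack_credits, credits_per_swap=1):
    """Approximate cost of one face swap from a given credit pack."""
    return pack_price_usd / pack_credits * credits_per_swap


# The 530-credit pack at $9.99, one credit per swap:
print(round(cost_per_swap(9.99, 530), 4))  # about $0.0188 per swap
```

Under two cents a swap is why a single pack can last months of casual use.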
So this is the tool that I like to use. Quite simply, you come to this page, Face Swap Free, and you upload the image. This is the original image, and then this is the target face that I want to upload here. So I could just quickly grab an image right now and upload it here. There she is, and I can upload an image of myself. So here's a picture of me. Not a great image for this example, but just to show the limitations and what it can do, I'm going to put my face onto this woman here — which shouldn't work great, but let's just see what it comes out like.
Okay. Hey, it's definitely got my nose here, this bulbous bit, and I've got some stubble on here. Let me click and have a look. You can see the quality isn't great right here. That's okay, because we can actually upscale this inside here. I could also download it, put it into any of the platforms we've been using, and upscale it there, but I can now choose to generate this for one credit, and it can also do face restoration. Let's spend a credit here. I'm going to download it and have a look at me. Lovely. Here's an image of me. Now, obviously, this is not perfect, because of the shape and size of the face. So I'm going to give you a few tips here.
Now, say I need an image of myself when I'm creating. For example, I wanted to create something — let me show you. If you've been on the site, AIvideo.school, you might have found an image like this of me — a somewhat handsome version of me, for sure — and it works and fits because of the face shape. So if you're inside Midjourney, you would be creating, for example: a man aged 38, looking at camera, sat at his desk. I'm going to show you what does and doesn't work here through some really simple examples.
All right. So, for example, this person right here — let me download them. I love this image; this is what I want. Download that, select it as my original image, and let's swap this. Now, it definitely looks a bit like me; I've definitely got my face right here. Let me zoom in. Definitely got my nose, eyes, mouth, and a bit of stubble here — but that's not my face shape. It doesn't work quite so well because it's not my face shape, and everyone has a differently shaped head and face. So you really want it to match the original one here.
Now, there's an even easier way to do this. If I come back in here and upload the image — so here's an image of me, for example, if I'm using this one — I can use that image. Remember, if we select "person" here, I can say: a man sits at his desk looking at camera, with this person selected on here. Let's go. Okay. So now it's used my reference image. Let me zoom in here. It's not going to look exactly like me — that's why we're using the face swapper — but you can see that it's using it and has a very similar face shape. So now, if I download this image, go back and upload it here, and do a swap, it fits my face so much better than those other ones before.
Now we've got a very similar image of myself. I might want to go in and tweak it in Photoshop or something to get this line slightly better here, but that could be me, swapped in right there with the right face shape. So that's a little hack I really want to put forward about face shape. Obviously, you could prompt and say "longer face", "thinner face", "slimmer face", but it's so much easier if you take the reference image you want to use first — the one where you want to change the background — use it inside Midjourney as the reference with that icon right there, and then swap the faces out. This is a very much underrated tool that I've never seen many people speak about. I really like Remaker AI.
— Akool: Face-Swapping —
Now, this tool is called Akool — the URL is akool.com. And it's actually got a few things. If I go back over to the main page that you'll see when you land here, there are all kinds of things: a referral program and such, your history, and some things I've been doing. They have Face Swap, Live Swap, Talking Avatar, a Video Editor inside here, and streaming. So it's good if you're doing product placement — for adverts you'd probably be using something a bit like this. But we want to use Face Swap from this tool.

I've already gone in and played with some things before. You get a certain amount of credits depending on the package you're on. I could upgrade — I can show you here the number of credits you get for $30, then the $79 and $350 tiers, and if you want loads: "let's talk". You also get the free tier, which gives you 25 images, so you can come on and play with this all you want.
All you do is come to the page here, choose your file and upload it, and then select the face you want to swap it with. For example, here's one I've done in my history. I can select here and choose what I want: for example, this image of me that we've made, or something else I've uploaded. Here's Donald Trump as a test, so you can see how real it is — because you know what Donald Trump looks like, or Mr. Beast or someone. Of course, I would not actually be uploading this; I would not be using it without permission. But for my own personal use, that's fine.

Let's generate. And this is Trump — it's definitely got a Trump-esque look to the face, a bit here and here — but I think the last tool we used, Remaker, would have done a slightly better job. Akool is definitely more realistic, though. If I look at this, that's definitely realistic; it doesn't have a lot of that cut-out blur, even when the face shape is different, which is the problem with Remaker. That's why I showed you about getting the face shape absolutely correct. This one definitely fits the face shape properly — but because the face shape is not like Donald Trump's, it doesn't look exactly like him. I wouldn't look at that and say for certain it was Donald Trump. But with more images and a better source image, it could definitely become my favorite tool, and it ranks pretty high among tools people want to use for face swapping. So I had to show you this.
You can also do things like use another image. So if you're creating an image, even just for yourself — like I could use this image — let's swap the face here. Let me pick someone you might know. Let's do Jimmy Donaldson, Mr. Beast, and have a little look. Okay, let's check this out. Yeah, definitely — I can still see Mr. Beast's face in here. I'm not sure it's as good as Remaker, but I have to show you every tool, and this one's been around for a while and I think it will stay around. So it could be the one if you want something more realistic — perhaps if you didn't want the exact person you're going for, but a more realistic image. Absolutely.

Now, I want to get into the exciting part: the course project. You're going to see me making more and more of the images I need for my final project that we're sending off to a film festival. You'll see me almost live, in real time — I'll cut out the rubbish, boring bits — generating all the images I need and how I'm doing it inside Midjourney. That's coming up.
— Course Project: Creating Images for Our Continued Project… —
So finally, towards the end of this section, I'm going to be doing the course project. If you've been following along, you'll know exactly where we are. We got our idea, we got our script, and then we started a style guide for what things should look like. Then we did an actual storyboard, so we have a lot of our individual images already created. And if you remember, there were some gaps in there.

So I'm going to get that up for you — I always have these to reference whilst I'm creating my images. There's our storyboard right here, and here's my script. I've got the entire script on here, just in case I get lost and need to remember what's happening. But I'm pretty sure I know exactly what's happening here; I'm pretty familiar with the story, but I'll keep it on standby just in case.
Now, the storyboard right here goes from the top across like this, and then back down and across. So come with me along here. We enter on this big wide establishing shot — as we know it's called — right here of the USA, Pearl Harbor, and then on the outskirts of Hiroshima. Then we come inside, and there's going to be a shot of the girl here. I may need a slightly wider one when we come to put this into video; I won't really know until we get it into video. Then there's the shot of Amy here in Japan. Then we've got her coloring in — I probably want another image of this with her coloring in. And then I've got the same here with Amy, as opposed to her looking at camera. Then we've got her drawing something, and again, drawing something. And then what's missing here is these shots of the girl looking up somewhere and a father entering shot — or we hear footsteps as a father enters shot. I might have a father enter here, and you might just get a father entering, or a shadow or something here.
So he comes in, and he says a few things. We've decided he's going to say something like: "Goodbye, I'm going into the city" or "going to Pearl Harbor. Draw a picture of me, okay?" — or "draw a picture of us", because I want that to pay off at the end, when we know there have been explosions and the picture drops to the floor. He says the same thing to Amy in Japan. Then I need shots here: Amy back at the table to color in; a look to the window and a smile, to set up the outside scene for later; and something to show that Amy's not alone. So maybe in that father's dialogue we'd also have "be good for your mother", to show the mother's home. Then I'm going to fade, so that it shows time passing. That's so we know the father has had enough time to get into the city — into the danger zone, if you like.
The music's going to change. And then I need a shot here of Amy's face: she looks to the window, something's wrong. So I'll have her look to the window; maybe I'll twist and zoom to the window, and then Amy will enter. I'll probably go through the window and see shots like this, come back to Amy here, and then I need an ending here. So I may have the picture drop to the floor — the picture of the two of them that they've been drawing — to show that that's the climax, or the ending if you like: that obviously something sad has probably happened. So, a picture of them.

So I need to fill in the gaps here and change some bits. And then, by the end of this, you can follow along with me almost in real time — you don't have to; you can skip forward. I'll cut out all the boring bits as I'm using Midjourney, but there'll be some parts where things go wrong or I can't quite get something right, and you'll see me work time and time again to get exactly what I want for this.
I may also occasionally use a little bit of Adobe Firefly or Photoshop to do some inpainting if Midjourney is really struggling, but it's going to be predominantly Midjourney. So let me go back in. The first thing I'm going to do: I have this, and I'm going to create a wider shot of this and a wider shot of this, just to make sure I have them. Then I want to create this shot and this shot of them coloring in. So first things first, here's the image of Amy from the back — Amy in the USA, the ships here. I just want to make sure I've got a wider shot of this, so that when I'm zooming in, I have enough of a zoom. So if I just put that like here, this should be wide enough. Let me give myself a little bit of extra room to play with — something like that. Let's submit that.
137
And then whilst I’m here, I can find
138
the image that we did of Amy in
139
Japan also, which was this one.
140
And then I’m just going to do the
141
same thing just to make sure we have
142
that.
143
I quite like the idea of this being
144
slightly from behind like there.
145
And let’s submit.
146
Let’s go into our creations.
147
Now you’ve got this shot, this shot, this
148
shot, and this shot.
149
Let me keep looking.
150
Okay, I quite like this one.
151
I’ll bring back on my storyboard.
152
Let’s have a green sofa on this shot
153
we have Amy.
154
So just for continuity, it probably wouldn’t be
155
noticed by the public when they’re watching, but
156
I would notice.
157
So I think if I have something like
158
this, there’s already a bit of a sofa
159
there on the right hand side, you see.
160
So I’m going to just go into editor
161
right there.
162
And let’s just do this.
163
Let’s do that.
164
And I’m going to say light green color
165
1940s sofa.
166
I’m going to submit that.
167
I’m going to take away the word light
168
just in case.
169
And I’m going to submit that.
170
And then I’m also going to do one
171
about the word color.
172
Let’s submit.
173
And let’s see what we get.
174
No, no, no, no, no, no, and no,
175
full house, no.
176
So let me continue, let me play with
177
this and I’ll come back when I managed
178
to get what it is that I want.
179
So here’s an example where I have done
180
so many iterations, I mean, so, so many,
181
and it hasn’t provided me with a sofa
182
that I want yet.
183
So sometimes when this happens, I take something
184
like this, I upscale it, and then I’m
185
going to put it into Photoshop, or I
186
could be putting it into Adobe Firefly, I’m
187
going to put into Photoshop and we’ll see
188
if they can generate me one with generative
189
fill.
190
And before I do that, if I keep
191
scrolling down, we know we generated some more
192
shots of Amy in Japan.
193
So okay, that’s quite nice.
194
You can like see it’s almost like out
195
of a window and seeing what’s there.
196
All right, let’s vary that subtle, let’s vary
197
that subtle again, while it’s going, I’m going
198
to download Amy’s image.
199
And now I’m right inside here.
200
So if I just, I’m going to try
201
a few things here.
202
Let me just select this shape so it
203
knows where it can use something like this.
204
Yeah, something like that.
205
Let’s try that one at first.
206
Let’s go generative fill.
207
I want a green sofa, and I haven’t
208
given it any details about time period or
209
whatever, but let’s just see if it does
210
anything.
211
Okay, it’s definitely given me some green sofas
212
here like this, the wrong kind of style,
213
but it understands what I mean.
214
How much better was that than just using
215
straight inside mid journey?
216
So I can say 1940s light green sofa.
217
Let’s generate.
218
Okay, not that one.
219
Oh, that’s pretty close like this one.
220
Okay, that’s pretty good.
221
Let me generate again.
222
Okay, getting close.
223
I think this is pretty much it.
224
This is almost identical to what I want.
225
It’s definitely enough.
226
Yeah, this is really good.
227
Okay, I’m going to save this.
228
And then that’s my wide shot image.
229
All right, let’s see how Amy in Japan’s
230
doing.
231
Let’s check out her.
232
So we still got this strange black framing
233
in the front.
234
But I do like that it’s got houses
235
outside.
236
Yeah, okay, let’s just play with one of
237
these.
238
I quite like this one.
239
And let me just go and try and
240
remove the black frame.
241
Just the one going across, I think.
242
Let’s just try a few things.
243
Let’s say remove, remove line, remove black.
244
And let’s just do Japanese house.
245
Okay, this seems to have worked with every
246
single iteration it looks like.
247
Let me see which one I like the
248
best.
249
I’ve got this one, this.
250
Okay, I think I like this one the
251
most as the wide shot.
252
I’m going to click upscale right here.
253
I don’t worry too much about these black
254
bars right now.
255
I could go into editor and I could
256
remove those if I wanted to.
257
But it probably would design, if I upload
258
this in video, it may remove them anyway.
259
But let’s go remove black.
260
Okay, and now if I am here, I’ve
261
either got, yeah, that’s got a strange bit
262
there, like you can see through, but I
263
like this little teapot detail.
264
Let me keep going through this one.
265
I like this, let me upscale that.
266
And now while I’m looking at my storyboard,
267
the next one, while that’s just upscaling right
268
there, up resing, these are my next two
269
shots.
270
I know they’re looking at camera and you
271
shouldn’t, but there’s something to do with these
272
girls looking straight into camera that’s really quite
273
cool.
274
I’m just gonna write a note to myself,
275
I think, because you could have something really
276
cool right there where you have, I don’t
277
know, like a Tarantino style title that comes
278
up or something with it.
279
Let me just quite crudely just write cool
280
title as look at cam.
281
And that will just remind me if I
282
just make that smaller.
283
Okay, I’ll just leave that there and it
284
will remind me.
285
So I do want a shot pretty much
286
identical to this though.
287
Perhaps I want this one slightly wider at
288
first and this one slightly wider at first
289
if I’m zooming in and do a title
290
or something, or maybe I won’t and I’ll
291
just have her staring and then she starts
292
coloring in.
293
So let’s get these girls both looking down
294
and coloring in for my next shot.
295
This one has finished upscaling, perfect. Really, really nice; download it. So here's the image that we had of Amy staring straight at camera, nice. Let me do Edit and let me just do this. Okay, and I'm going to say "girl looking down at drawing", which might be too much detail for it. So let's go "girl looking down", and let's also just say "looking down", and see if any of those work. Okay, now, all right: we know from our lessons before that if I do that without any style or image reference, it does it perfectly, a girl looking down, but it's not our same girl, and we want consistency throughout this. Obviously, of course we do. Now, I might not need to generate this image: when we're in Runway next and we're making our video, I can actually tell it in the prompt. I could give it that first image that we have and say "girl looks at camera and then girl looks down coloring", and we wouldn't need this at all, but I want to create it just to make sure. So I will now instead go back to the image that we had, use it as the image and style reference, and say "girl looks down, looking down". Let's do that; let's hit that one. I also want to put in that girl right there along with the style and the image and say "looking down". So I've got all three right here, so it should be the exact same image. Let's see how Midjourney coped with it.
So this one just had the style and the image, "looking down": not the same image at all. This one, where I wanted the style, the image and the girl, is also struggling with it. So, I mean, I know for a fact that I will be able to do it inside the video tool, but it'd be nice to have it here as an image as a backup. So if I just look at some of these images right here, this is what I would do. I would grab this one and go Edit. She's got bob hair, hasn't she; remember we did that for her hair. So "1940s bob hair", or I can just do "bob hairstyle". Let's see what I've got here. And we're getting closer to it. You can see this is the absolute style that we'd have, probably this one or this one, I think, or something. But we don't really need to do that, because I'll show you just quickly: when I'm in Runway and we go to generate video, I'll be able to drop in that image that we want of Amy looking down.
So now we have the image here. I can give it a prompt. I can also just go into the camera controls and tell it to zoom in slightly; I can tell it where to zoom into. And I can give it a prompt: "looks down coloring". It may be confused by "coloring", so I might have to generate again, but let's just see what happens there. So you can see that just inside Runway, inside the video tool, I can do this, and the girl looks down and is coloring in. You see, that was done inside there. So I don't need to... this is the life of an AI video creator: I over-generate just to make sure I've got every image, and I can tell it to start on one image and finish on another that I've generated. But sometimes it's just not needed and you have to cut your losses. So: I've tried really hard to get this look-down shot. I could keep playing with it, and I would definitely get it, but sometimes, if you've already got the image that you need, you know you don't need to. If that doesn't work, then I would come back in and I'd create more and more and more.
Same thing, if I go back to our storyboard, with this shot. I'm just going to have the girl coloring in. I may need to add in... oh no, there's something here. I just want to have her, I'll probably have her hold. So she's looking first at screen and then starts to color in. Pretty cool shot, really cool.
All right, so next I wanted to create this shot again, but I need her father walking into shot. So I'll just jump back into Photoshop here, as they do a better job. The Adobe suite is really good, and you saw when we did Adobe Firefly why I would definitely go across the two platforms right now. Midjourney does do great stuff with a lot of inpainting, for when you're doing simpler inpainting tasks. But I didn't generate this image that way. If I had generated this image straight away in Midjourney and said "with a man walking in", then we could get that. But since I already have the image, I'm using inpainting inside Photoshop right here. Or you could do it inside Adobe Firefly or something, like I said. So this is my image of her dad walking in right now. If you missed this, go back and watch the Adobe Firefly and Photoshop tutorial and it'll show you how. I'm going to do the same with Amy in Japan also. And then once again, in exactly the same way, I've got Amy's dad walking in right here. I had to make the shirt white, as opposed to the other colors it came up with, just by using Generative Fill, just like in the tutorial about five tutorials ago for Adobe Firefly and Photoshop. So I have him walking in in the background here, and that's the shot I'm going to use for that.
So, on to the next shot. We need to go from the shot where he walks in to this shot right there. All I'm going to do, for the shot where we had her dad walking in, is tell Runway "girl walks right and walks out of shot". I don't need to generate an image for that. And then the same thing for this one: the girl is going to stand up and go. I do need to change this to a kimono to match this one; she's in like a pink kimono. So let's change that, actually. Here's the image right here, Technicolor 1940s. Do you remember doing this a little while ago now in the course? Let me change this to a pink kimono with, like, a bow at the back. Let's change that right there: "pink kimono, yellow bow". Let's see what happens if I do that. Okay, let's look at some of the options it's given us. No, no. Yes, potentially. Yes, potentially. That one's probably the best one, I think. And then if I want to just change that, I'm going to remove that there and just say "yellow bow". And I'm also going to go just "pink kimono", in case I need to add the bow afterwards if it's not doing it. So look at these options for the yellow bow. No, none of these are really suitable. Let's have a look here. Yes, yes, yes. Okay, that's good. I'm going to go Edit and do this, and now I'm going to say "yellow bow" a couple of times. Okay, lovely jubbly. I think this one, yeah, that's really, really nice. I actually really like this image. So let's up-res that, and then that's my image for that. Done and complete.
Let me work out what I’ve gotta do
504
next now.
505
So she’s gonna sit back down at the
506
table.
507
I don’t need to do that.
508
I can have that done inside the video
509
editor.
510
Time passes, she’s cloning in.
511
What I want to get is a closeup
512
of her pictures and then have it where
513
it’s a picture of her and her dad.
514
And that’s what I want her to say.
515
Draw a picture of me and you, you
516
and I.
517
So let’s do that one.
518
Let me just see if that’s high res
519
finished.
520
Download it, there.
521
Very pretty image.
522
The lighting in here is really nice.
523
Okay, download that.
524
Now I want to go back to this image over the shoulder of Amy in Japan, and I'm going to play with it. I'm going to try it first inside here. When we get into the nitty-gritty, you're going to see me do a lot of inpainting with other tools, but you will be able to do it inside of Midjourney; it just might take more time and more iterations. So let's go "a child's drawing of a girl and her dad" and submit that a few times. Okay, and here are the different iterations that I've got. Let's compare them. You could keep going and going, and it would normally take me twenty-plus iterations to try to get this. So I'm going to upscale this one and take it into Photoshop now, and let's see if we can get it done quicker in there. Let's trial this. I grab the area right there that I want it to draw on, let's go like that, and tell it "a child's drawing of a girl and her dad". Let's start with that. Okay, so we're definitely getting pictures here of a girl and a dad. That's great. Let me say "a pencil drawing of a girl and her dad", "a child's pencil drawing of a girl and her dad". Okay, I quite like one of these; that's quite a nice image right there. I'm actually going to generate another one. I like to have quite a lot of options, but these are exactly what I need. I'm going to do this both for this shot and for Amy in the USA as well. This is a nice family one; I'm going to have this one. And here's the photo I'm using, the picture that I've generated for Amy.
What I’ll probably do, because I need to
572
show time passing, is I’ll have it blank
573
and I’ll fade out.
574
Then I’ll fade back in to this original
575
image and that will show time passing, which
576
we need to have right here.
577
Now what I need is just Amy looking
578
over at the window to the left.
579
Actually, I don’t need to do that because
580
I’m gonna use both these images right here.
581
I’m gonna use this one and this one
582
as they’re coloring in and looking down.
583
And I’m gonna have them shockingly look over
584
to the left-hand side quickly, which we
585
can do inside Runway, as I just showed
586
you just now.
587
We can do it inside here.
588
So that’s all the images that I need
589
to tell my story.
590
What I’m going to do next is actually
591
put these down in the edit.
592
You’re gonna see what I do first is
593
I lay these down to see if it’s
594
telling a story and what’s needed actually inside
595
my editing software before I’ve even made it
596
into a video.
597
So I’m gonna lay those down next and
598
you’ll know if you’re missing something or not.
599
Maybe I am, but this is how I
600
come across my problems.
601
Now we’ve generated those images.
602
You saw me do it with Storyboard and
603
you’ve seen me teach you all about the
604
remixing and using style and prompts and things.
605
Now we’ve done that.
606
You see me do a lot of in
607
-painting, either using this tool or one of
608
the Adobe tools.
609
You can do it all in here mid
610
-journey.
611
It just might take more, more iterations than
612
if you were to use in-painting generative
613
fill rather inside one of the Adobe products
614
or similar.
615
So I’m gonna put this down in the
616
edit now.
— Edit Begins Now: Lay Down Your Still Images to See the Story —
Now, the next part, once you've got all your images, is actually really important. Well, not crucial, but I'll show you why I do it: I've just realised a mistake that I made, and this is the only way you can really get to see and understand whether what you have is working or whether you're missing something, which I definitely was, and I had to go back in and fix it. So, this is Premiere Pro, the editing software that I use, but this could be CapCut, it could be Filmora, it could be any tool that you use to edit. It doesn't matter.
All I do is drag in all of my stills, so hopefully you've been organising them as I mentioned. I have a folder that looks something like this; it's all the stills I've been creating. I drop them in and then I order them, and then I use this. The length of these clips doesn't matter right now; they're all about four or five seconds each. It doesn't matter, because this isn't going to be the final version of anything. We haven't even made anything into a video yet; it's just to lay it down so I can see it.
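Dropping the stills in in the right order is easier if the exported files already sort correctly. As a small illustration (the folder and filenames here are hypothetical, not from this project), zero-padded shot numbers make alphabetical order match shot order:

```python
import tempfile
from pathlib import Path

# Sketch only: simulate a stills folder with a few shot exports.
# In practice this is wherever your image tool saves your downloads.
stills_dir = Path(tempfile.mkdtemp())
for name in ["020_amy_usa.png", "010_pearl_harbor.png", "030_amy_japan.png"]:
    (stills_dir / name).touch()

# Zero-padded shot numbers mean alphabetical order == shot order,
# so dragging the whole folder into the editor lands in sequence.
shot_order = [p.name for p in sorted(stills_dir.glob("*.png"))]
print(shot_order)
# -> ['010_pearl_harbor.png', '020_amy_usa.png', '030_amy_japan.png']
```

Numbering in tens (010, 020, 030) also leaves room to slot a missing shot in between later without renaming everything.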
So I'm just going to make this bigger for the sake of this tutorial, move that there and move that across so you can see. What I do is I lay down my shots and I start talking to myself, which makes me look crazy, but I talk it through and start understanding whether or not this is working. The order of these shots might change; I've reordered them a couple of times. So I have this initial establishing shot here (again, we're going to have titles, aren't we), an establishing shot of Pearl Harbour, which would zoom in, and then a shot of Amy in a house from behind, which would zoom in. Then I've got a shot of this city in Japan, which would zoom in, and also of Amy here, which would zoom in. Then we cut back to the shot of Amy looking at the camera, which might have another title, and then she'll start colouring in, and then we'll go over her shoulder and look at what she's colouring. I'll do the same with Amy in Japan: she'll be there, we'll probably slow-zoom in, and she'll begin to draw, and then we'll also see that. I'll probably change this image; eventually we'll look at what it looks like when it comes out as video, but I might change her actual drawing.
And then we're back to Amy in the US and her dad walks in; I'd have some noise here of footsteps coming through. Her dad walks in, and then we also have Amy in Japan and her dad walks in. With Amy in America it'll say something like, "Hey, how you doing? Don't miss me at work too much today." And pretty much the same thing with Amy in Japan: "Don't miss me at work, I won't be long, I won't be late." And then it'll say, "Why don't you draw me a picture of you and I and mum, us lot together." And again, same thing: "Draw me a picture of you and I and mum, okay, and I'll see you after work." Then she goes back and sits down at her table and starts colouring in. This will be a blank page, and we zoom out of Amy doing that. And then I'm going to do the same thing with Amy in Japan: she sits back down at her table, starts drawing, we zoom out of there, and we fade. Time passes.
We come back to the drawings, now completely looking like this. Here's one Amy, and she hears a noise, so we come in on her and we hear something like an explosion. She looks up, looks to the window, shocked, runs over to the window in the distance there and looks out. And we're going to cut back to... oh no, that's skipped forward; so she looks out of the window. Then we come to the other Amy's picture, there's an explosion, she looks over at the window, "What was that?", goes over to the window, and then we cut back. There might be a fade out here and some noises. Back to Amy: we're going to go over her shoulder and look out onto this explosion that's happening, and there's a realisation that it's her dad who's gone to work over there. And the same thing with this Amy: we go over her shoulder, out of the window, we see the explosions, and we see Amy's shocked face.
Now, I realised that I was missing some bits here based on my storyboard. I was missing a few shots: they have this picture, this piece of paper they've been drawing, so I need a couple of other shots here, and I have them here; this is what I was just working on, and it didn't take me long. I needed shots of the paper on the floor, and shots of them holding the paper and it falling to the floor. So I end up having these shots right here: it goes from there to Amy, and then I'll get the video tool to have her drop it, and the other Amy to drop hers. It'll fall to the floor and we'll see their childhood pictures, the drawings they've been making of their families, and it'll fade, and we'll have some kind of title. I'll generate something with ChatGPT, something nice that summarises it, like "it's not only soldiers who lose in war" or something, that'll come up here and carry the entire message.
So that works as a layout. When I animate these, if something is not working, I may go one step back and go back to the drawing board on some imagery; there's probably going to be some tweaking of these pictures, changing them in Photoshop with Generative Fill. But laying it out was the only way I realised there was something missing. So do lay these down in your edit. Just plonk them on; timing doesn't matter. Just drag through and talk to yourself, get your script back up if you need to, see if it matches, and just see if it works. Does it work? Are the shots okay? Are they facing the wrong way? I did have one where this shot, or another one like it, was facing the other way, and I flipped it because it made more sense. Little things like that you can only catch when you visualise it. So put it down in the edit. It's kind of important; otherwise you're going to go back too many times once you start turning these into videos, and that takes a long time, which is what we're going to do next. So let's get over into that section.
— Upscaling AI Visuals: What You Need to Know —
So, upscaling, or up-resing, just making your images higher quality, is really important. You'll see me do this throughout the course when I have images inside my AI image generation tool. Whichever tool you decide on, there's quite often an upscaling option available, and you should always, always take it. I'm going to show you exactly why, and some different ways you can do it; there are even some independent tools. First, I'm going to do this live with you, so you can see the comparison and, when we put the images into an AI video model, the quality difference that we get. Let's test this first-hand. So let's go.
"An old man staring at camera, wrinkles, dramatic, intense, blurred background." Okay, let's have a look at that. Let's also do "a drone shot flying above London city". Just simple, simple prompts for the sake of this tutorial. What I'm going to do is take these images, up-res them, put them into here, and see what they come out like as video, along with the reasons why you should be up-resing and how. Now, you could be doing this inside the tool, but depending on your need and what it's for, if you're blowing this up to 4K and it's going to cinema, then you might want to use an independent tool, like I'm going to show you. There's a whole section on upscaling later in the course; I'll give you a brief overview now. If you're just making it for YouTube, just making it for online, then still upscale to get better quality. Some people don't even do that, but I definitely would.
So you can use an external tool like Topaz, one of the most famous ones, but there is a price to it. Look: $299. It's not cheap, but if you're doing this full-time and making movies, then you might want to use it. I'll just give you a brief example. If I keep scrolling down their page, you can see an example. Let's compare these now, and you can see the difference right here between one that hasn't been upscaled and one that has. They claim it can really sharpen your images to be ultra-realistic, but like I said, it all depends on what you're using this for. Think about your end result, where it's going, your product, and reverse-engineer the decision: do I need this or not?
So I like this one; I'm going to download that. That's my first one, not upscaled, and then I'm also going to do Upscale (Subtle) on it. Okay, and here is my completed one, upscaled, so let me download that too. Now I can compare these two images side by side. This is the upscaled version; this is not. Let's zoom in right here, and zoom in on this. And we can definitely see it: look at this, pixelated here, and a lot clearer there. The light here is blurred, and here it isn't. So it definitely makes a difference to upscale.
Now, does that make a difference when you make video? I wanted to compare it on a landscape shot. It's actually given me a drone in the shot; that's fine, let's just roll with it. So I'm going to download this one, and I'll also upscale it. This has now been upscaled, so I can download that one too. Once again, let's compare these side by side. This is not upscaled; this is upscaled. We might be able to see it more clearly because we've got stuff in the distance here. If I unfairly zoom in that much, you're not going to see too much difference. But I can definitely see this one has more. Let's zoom out a couple of steps, and I can see this building right here; it looks like I need to zoom in one more. Yeah. Look how much less detail there is here compared to here. I can see these quite clearly coming out. And if I actually get the file information on this, I can see that it is 1.6 megabytes and 1456 pixels across. Compare that with this one: now we're at 6.7 megabytes, because we're 2912 pixels across. So it is actually a bigger, higher-quality image.
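You can check those numbers without opening an image editor: a PNG file stores its width and height at a fixed position in the IHDR chunk, right after the 8-byte signature. This is just a stdlib sketch (the 816-pixel height is an assumed 16:9 companion to the 1456-pixel width mentioned above), with a handcrafted header standing in for the real downloads:

```python
import struct

def png_size(header):
    # PNG layout: 8-byte signature, 4-byte chunk length, b"IHDR",
    # then big-endian width and height (4 bytes each).
    if header[:8] != b"\x89PNG\r\n\x1a\n" or header[12:16] != b"IHDR":
        raise ValueError("not a PNG")
    return struct.unpack(">II", header[16:24])

# Fake 24-byte header standing in for the non-upscaled export
# (1456 px across in the example above; 816 px is an assumed 16:9 height).
plain = (b"\x89PNG\r\n\x1a\n"
         + struct.pack(">I", 13) + b"IHDR"
         + struct.pack(">II", 1456, 816))
print(png_size(plain))  # -> (1456, 816)
```

Doubling each side (1456 to 2912 across) quadruples the pixel count, which is consistent with the file jumping from about 1.6 MB to 6.7 MB.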
Let's compare that in Runway, because the whole purpose of this is to see whether it makes a difference when you're creating the video. So let's do a little test. Here's our non-upscaled man staring at camera. I'm not going to give any prompts for this, so we'll probably get a slightly different result, but we should still be able to see the quality side by side. Great, let's generate that first one, and generate the second one to compare the up-res version. Same again for our city shots: this is the non-up-res version, let's generate that, and our upscaled version for comparison. Now these two have generated already, so I can play them here, and I can see that they both zoom in, which is quite nice. But it's hard to judge on such a small image, so let's download these and see them at a large scale. Let me just play these side by side and see if you see any difference. It's hard to know, because they zoom in so close here that you're bound to get blurring anyway, and this one doesn't blur so much. If I look at the file sizes, they're actually both pretty similar. Let's zoom into these. Okay, here's the image that I've got finalized; let's have a look at that. If I go into here, but come back slightly, it's about here, isn't it? So remember this: there's some blurring here, but that's just the focus. Note the quality of the lines right there, and then compare that to this one right here. Yeah, I would say the quality of these lines is slightly deeper, but there's not that much in it. And that's because inside Runway ML, or wherever it is you're generating, a certain amount of upscaling happens as it creates your video, and it manipulates the image slightly anyway, so it doesn't make too much difference.
178
Now you can post having your video done,
179
use another tool to up-res, to upscale
180
this.
181
And we talk about that later in the
182
course where it perhaps will make a difference.
183
Let’s compare these two to get our final
184
thoughts.
185
These are finalized.
186
There’s a drone hovering over London that has
187
three, four gherkins, we call them.
188
The gherkin building over here, which is funny,
189
I only just noticed that, not that it
190
matters for the sake of this tutorial and
191
a very similar shot there.
192
So I’ve downloaded these.
193
Let’s take a look.
194
Okay.
195
Not upscaled and upscaled side by side.
196
Not too much of a difference, but oh,
197
it’s slightly maybe in lighting here.
198
Okay.
199
Let’s take a little look at both of
200
these.
201
If I zoom in for this one right
202
here, let me look.
203
Okay.
204
So this quality right here, we get a
205
bit murky and lost and the lines aren’t
206
that sharp.
207
Let me take a look at this.
208
Perhaps slightly still murky, but perhaps slightly sharper
209
there side by side.
210
This one is two megabytes bigger than this
211
one, not much in it at all.
212
So I think my conclusion with up-resing
213
is definitely do up-res because you might
214
as well have more information that you’re putting
215
into your video generator, which for us is
216
going to be runway predominantly.
217
You might as well give it more information
218
because it has more that it can do
219
with it.
220
It will slightly skew and upscale anyway, so
221
it’s not the end of the world.
222
But when that happens, if you’re looking to
223
be professional and getting to the industry and
224
make some great things, then of course we
225
can talk about and we talk about later
226
there’s a whole section, tools like Topaz, where
227
you can actually up-res imagery.
228
Like we’ve said, if I show you some
229
other comparisons where we’ve actually said, look, when
230
you get this blurring and this slight pixelation
231
and we just sharpen it a little bit,
232
that’s the kind of thing that Topaz can
233
do, which is really exciting how realistic this
234
looks in comparison, like I actually shot this
235
on a nice camera.
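If you want to experiment with up-resing locally before reaching for a dedicated AI tool like Topaz, even a basic resampling pass illustrates the idea of handing the video generator more pixels. This is a minimal sketch, assuming you have the Pillow library installed; the filenames are placeholders:

```python
from PIL import Image

def upres(src_path: str, dst_path: str, factor: int = 2) -> None:
    """Upscale an image by an integer factor using Lanczos resampling."""
    img = Image.open(src_path)
    w, h = img.size
    # Lanczos is a good general-purpose filter for enlarging photographs
    big = img.resize((w * factor, h * factor), Image.LANCZOS)
    big.save(dst_path)

# upres("drone_london.png", "drone_london_2x.png")  # example call
```

Bear in mind a simple resample won't recover detail the way an AI upscaler such as Topaz can; it just gives the video generator a larger canvas to work from.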
Anyway, that's my two cents on upscaling. Let's continue with the course.
— Task: Create and Refine Your AI Images —
Now, of course, I have a task for you in this section. If you're following along with exactly what we've been doing and creating your own project, then please go ahead and follow your storyboard. Use the storyboard we developed in the last section as your roadmap, ensuring that each image aligns with the visual flow and story of your project. Don't suddenly have one image and then another where the colour is completely different, sepia and yellow toned in one and blue in the other, and the character looks slightly off.

You now have all the tools I've shown you, especially in Midjourney, so follow along and start filling in the gaps from your storyboard until you have all the images you need to tell your story. Of course, refine these images: use the tools we discussed and make sure every little detail is right. You haven't got six fingers in one shot, and she hasn't got a pink bow in one shot and a yellow one in another. You can go in and make sure it's perfect. When you get these images right and spend a bit of time on them, it saves you so much time in the future, I promise you.

Aim for perfection. These images (not videos, sorry) are everything: they're what will give the information for the video in the next section. Believe it or not, for an AI video these images are more important than the video generation itself.

Then you're going to save and organize. Like I showed you, make sure these are all in the right place. I hate it when you have to go through your desktop and your downloads to find where things are. Keep organized, like the proper little production company that we are when we're making this.
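One way to keep that production-company discipline is to script your project folders so every project starts out the same way. This is just a sketch; the folder names are my own suggestion, not a standard:

```python
from pathlib import Path

# Suggested layout for one AI video project (the names are arbitrary)
SUBFOLDERS = ["01_script", "02_storyboard", "03_images",
              "04_video", "05_audio", "06_exports"]

def make_project(root: str, name: str) -> Path:
    """Create a project folder with a consistent set of subfolders."""
    project = Path(root) / name
    for sub in SUBFOLDERS:
        (project / sub).mkdir(parents=True, exist_ok=True)
    return project

# make_project("~/Videos", "my_ai_short")  # example call
```

Run it once per project and you always know where your storyboard images or exports live.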
The outcome: with these high-quality images complete, you'll have a strong visual foundation to transform into dynamic animations, setting up your project for success in the next stage. And that's exactly what we're doing next.
— LORAS: Getting Character Consistency (Update Lesson) —
Now this is a little update lecture that I'm adding to the course, and it's about LoRAs. Those of you getting familiar with AI may have heard this term thrown around: LoRA, L-o-R-A. What does it mean? It stands for low-rank adaptation. We don't need to get too in-depth about what that means exactly, but think of it as giving an AI model lots of information, in the form of images, so that it can create a character, a more consistent character.
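To make "low-rank adaptation" slightly more concrete: instead of fine-tuning a full weight matrix, a LoRA learns two small matrices whose product is a low-rank update added on top of the frozen weights. A toy NumPy sketch of the shape of the idea (real diffusion models apply this across many layers at once):

```python
import numpy as np

d = 512   # toy layer width
r = 8     # LoRA rank, much smaller than d

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))   # frozen base weights
A = rng.standard_normal((r, d))   # trainable "down" matrix
B = np.zeros((d, r))              # trainable "up" matrix (starts at zero)

scale = 1.0                       # the "LoRA scale" slider you see in tools
W_adapted = W + scale * (B @ A)   # effective weights at inference time

# A LoRA only stores A and B, far smaller than a full copy of W:
lora_params = A.size + B.size     # 2 * r * d
full_params = W.size              # d * d
```

That `scale` value is exactly the weight knob you'll meet in the inference settings later in this lecture: too high and the update dominates, too low and the character fades out.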
So, if you want to create consistent characters: we've seen before that we've used Runway, and then things like face swapping to improve the results, but when we do face swapping we sometimes see that the face is a little bit soft. That's a very quick way to do it. There is a more, let's say, professional way, and that's creating something called a LoRA. So I'm going to show you how to do that. There are lots of places and tools where you can do this; I'm going to show you probably one of the most straightforward, because some models can get quite confusing and I don't want to over-confuse you. AI shouldn't be confusing with these desktop platforms we're using.

So let's start. I'm coming over to a site called fal.ai. Go there and you'll come to a page something like this. You'll need to log in with something called GitHub; that's fine, just create a free GitHub account with your email address, then sign in and you'll be confronted with a page that looks something like this. What I want you to do is find this page right here; because I've used it before, it's already showing for me. Come over to Explore, then go to the model search and type in "LoRA" and search. If you scroll down to the training section, you can also find it right there under training: one called Flux LoRA fast training. This may change, or there may be an image there or something, so look for the one that says Flux LoRA fast training, and you'll come to a page that looks just like this. You won't see a training history on here; I've already done this, so I'm going to show you how I did it, but yours will be blank, it'll say there's no history.

Now, what we do, basically, is upload lots of images of someone, someone whose images you have the rights to use; we've talked about this before in the ethics section. Lots of different angles of that person: left, right, up, down, above, below. As much information as you can give it. So I'm going to upload many different images of myself, we'll upload them here, and basically a LoRA will be created. You can actually transfer this LoRA, and if another AI tool can accept and use LoRAs, then you can use it there; we won't need that for this tutorial, but I will show you it.

So let me grab these images of myself I have here. You can see I've taken lots of them; let me bring them up and show you. These are just images of me standing, taken with an iPhone in a living room: front, back, front again, side, back, side, close-up, low angle, above my head, to the side, back again, front, slightly side and closer, the back of my head, low angle, close-up, arms up, side, side, back. You get the gist. I've given it as much information as I can.

So all I have to do is come here, go to add images, and inside the "me" folder just select all of these. Once again, I just took these on an iPhone. I've got quite consistent lighting, standing in front of a big window, so there aren't lots of shadows, or lots of my features covered by shadow. Now, the clothing you're wearing: yes, the model will most likely take this or something similar. It might change the colour of my jumper unless I instruct it not to. But I like to keep something plain so it's easy to change if I want to, and not too confusing. Nothing with a collar that comes over part of my face, or a big hood, or anything like that. Be as clear as you can. So I've just got dark pants on here and a collarless jumper or sweatshirt; long sleeves are absolutely fine.

All you do now is add what we call a trigger word. If you're creating a character like this, for me it'll be "Dan", for example; call it whatever you want. Click "more", and this is where you can start changing things like the number of steps, but I wouldn't. Actually, there's nothing here I would change: create mode should be automatically selected, and you can change the number of steps it runs. It will tell you here that more steps are better, but the default of a thousand is absolutely fine, otherwise you might be here a while. I've never had an issue with that at all.

Now all you do is hit start. Yes, there is a cost for this program, but it's not very expensive. I've topped up, and generating quite a few of these images cost, I think, less than a dollar or so; we'll see in a moment. I've got $7.86 on here. So all you would do now is click start, and it'll say "processing" right here as it begins your LoRA, and then "completed". For this many images (what have I got here? 8, 16... 21 images) it took less than 10 minutes to process. Obviously that will depend on the site load at the time and such like.
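For those who prefer scripting over the web UI, fal.ai also exposes these models through an API. The sketch below assumes fal's official `fal_client` Python package and the `fal-ai/flux-lora-fast-training` endpoint; the parameter names (`images_data_url`, `trigger_word`, `steps`) are my understanding of that endpoint and may change, so check fal's documentation before relying on them:

```python
def build_training_args(zip_url: str, trigger_word: str, steps: int = 1000) -> dict:
    """Assemble the request body for Flux LoRA fast training.

    zip_url: a publicly reachable URL to a .zip of your training photos
    trigger_word: the name you'll use in prompts (e.g. "Dan")
    steps: training steps; the default of 1000 has always been fine for me
    """
    return {
        "images_data_url": zip_url,
        "trigger_word": trigger_word,
        "steps": steps,
    }

def request_training(zip_url: str, trigger_word: str) -> dict:
    """Submit the job. Requires `pip install fal-client` and a FAL_KEY env var."""
    import fal_client  # imported here so the sketch runs without the package
    return fal_client.subscribe(
        "fal-ai/flux-lora-fast-training",
        arguments=build_training_args(zip_url, trigger_word),
    )

# request_training("https://example.com/me.zip", "Dan")  # hypothetical call
```

The response should include a URL to the trained LoRA weights file, which is the same file we download and save from the UI below.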
All you do now... I could export this, but don't worry about that, actually. All I need to worry about is here. "Show the output": you could copy from here. Don't be worried about seeing the code; I don't like showing people AI models with a lot of code in them, it scares too many people. "Show files" is fine, let's use that. So here is actually your LoRA right here. Download these, copy them, and store them somewhere, because, like I said, sometimes in the future (and we'll get on to bits like this) you may want to use a LoRA inside another piece of software, another AI program, and then you'll have it saved. So copy and save it. We don't need it for this tutorial, but it's always good to have, seeing as you've spent the time doing this.

Now I'll go to "run inference", and this is where we start to use the LoRA we've created to get a better version. Afterwards, I'm going to compare this to Midjourney with face swapping, which was the much easier way of doing this, and actually cheaper, because Remaker, which I've shown you, is free a lot of the time, and then it costs something like one credit, and I think you get 500 for $10 or so. So it's very, very cheap.

So let's go back to this. You can see it automatically filled this in with "Dan" once I clicked run inference; Dan is the character we've created. So I can say something like: "image of Dan standing in the street sunny day looking at camera". I'm not giving it much detail at all; I haven't told it what he's wearing. Now, if you're doing this for consistency, and we know the pictures I uploaded had a beige jumper and some dark grey pants, you could right now say "wearing a beige jumper and dark pants", and that will make sure it's taking it from the LoRA and the images you uploaded. You don't have to, though. Let's leave it; I like to play with this. Pretty basic instructions.

Now, we've used Flux before. Do you remember, in the images section, I showed you AI tools that white-label, that use another AI model inside themselves? That's what this is: inside fal here, we are using Flux, so when we generate the image, it'll be generated through Flux. You'll see that I don't like Flux's images quite as much as Midjourney's, but you are able to get them very accurate to a character, because you can do this with a LoRA.

Before that, I'll come down to the size I want to use. The LoRA scale here defaults to one, which is absolutely fine. The scale is basically, as it says here, the weight you put on the LoRA: how much the model takes from it. The trouble is that if it takes too much or too little, it's going to be drawing too much or too little from your original LoRA images. At one end of the scale, rather than putting me outside, it might actually put me in my living room again if the weight is too high; if it's too low, the result might not look like the character. Keep it at the default of one, at least to start with, and then you can play with it.

I want this in 16:9, of course, and then the only other thing I have to worry about is the number of images; everything else I keep at the default. I want to generate four, so let's do four images. The safety checker checks whether there's anything that shouldn't be allowed inside Flux: adult material, anything offensive, violence, that kind of thing. I can select an output format, JPEG or PNG, if I want; for this tutorial just leave it as it is, and I can click run.
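Those same inference settings can be expressed through fal's API as well. This sketch assumes the `fal-ai/flux-lora` inference endpoint and `fal_client`; the field names (`loras`, `image_size`, `num_images`, `enable_safety_checker`, `output_format`) reflect my understanding of that endpoint, so treat them as assumptions and verify against fal's docs:

```python
def build_inference_args(prompt: str, lora_url: str,
                         scale: float = 1.0, num_images: int = 4) -> dict:
    """Assemble a Flux + LoRA inference request mirroring the UI settings above."""
    return {
        "prompt": prompt,
        "loras": [{"path": lora_url, "scale": scale}],  # keep scale at 1.0 to start
        "image_size": "landscape_16_9",                 # the 16:9 option from the UI
        "num_images": num_images,
        "enable_safety_checker": True,
        "output_format": "jpeg",
    }

def run_inference(prompt: str, lora_url: str) -> dict:
    """Generate images. Requires `pip install fal-client` and a FAL_KEY env var."""
    import fal_client  # imported here so the sketch runs without the package
    return fal_client.subscribe(
        "fal-ai/flux-lora",
        arguments=build_inference_args(prompt, lora_url),
    )

# run_inference("image of Dan standing in the street sunny day looking at camera",
#               "https://example.com/my_lora.safetensors")  # hypothetical call
```

The `lora_url` would be the weights file you downloaded and saved after training.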
Now let's take a look at these, shall we? I'll just open them up one at a time. This one, look, that looks just like me. Once again, like I said, it's wearing the same jumper as I had on here, with a nice bit of lens flare, but it's not the same colour; I didn't tell it that in the prompt we gave. It's exactly the same jumper, just grey, but that is exactly me. Perfect.

Alright, let's have a look at this next one. It's given me short sleeves here, as if I'm rolling my sleeves up, and I'm a little bit slimmer in this one than the last. It's funny how, when it can't take the information, it puts you either over or under the weight that you are, but that's definitely my face. It does look a little soft. Let's have a look at the others. Yep, definitely me. It's almost like it's perfectly Photoshopped my face into the street, but the lighting all matches the location. Really good. Let's look at this last one. That's exactly me. Look how much more weight I have here than in, say, that first image; it's funny how AI does that. So there's the full model, and it looks exact.

So if you're trying to have a character, and I wanted me as, I don't know, a knight of the round table, then I could create exactly that. In fact, that sounds like a lot of fun to do. I can see the balance has gone down to $7.72; what was it, $7.80-something before? So about $0.10 for that. Let's do: "image of Dan standing in front of a castle. He is a knight wearing full armor. Moody, gritty." I'm going to give it those adjectives right there, keep exactly the same settings, and run that.

Okay, let's take a look at these. Really nice. Alright, not so much moody and gritty; like I said, Midjourney is a little bit better at giving moody, gritty, cinematic images, and I find these are always a little bit flat. But look, I'm a knight in armour in front of a castle. Really good. A bit of a Disney-style castle rather than the British castles I'm used to, or that Germanic style with the pointy roofs, but really, really nice; I didn't give it much detail about the castle, of course. Oh, this one looks a little bit more like what I'm used to at home. Saying that as if I live in a castle; I don't, I promise. So yeah, really nice. This one's got a hoodie on by the looks of it; maybe the knights had hoodies, not sure. But perfect, it's exactly me. Look at the face on this one.

Okay, so let's compare that, I guess you'll want to see. Let's compare it to the old way we did this. Of course, I've uploaded this person before, the same image we used, so here's that same image, just one of the ones I uploaded when I was training the LoRA model. I can take this and say: "this man is a knight standing in front of a castle", let's do "moody gritty", and run that. Oh, I forgot. I love it when I make mistakes, because I said he's a knight but didn't say he's wearing full armour this time, so it's got me just in my jumper stood in front of a castle. That is nice, though. We can quickly take a look: you can see it's close to my face, but not quite.

Okay, let's do that one more time. Select that, select that: "Here's a knight in full armor standing in front of a castle, moody gritty." Hmm, Midjourney is really not taking to this, so let's work out a different way to do it. Okay, let's take this image right here; I'll compare these side by side with the one I did of me standing in the street. Let's do: "This man standing in the street sunny day looking at camera." Okay, and in the meantime let me take this one into the editor. It looks somewhat like me, but not quite. In the editor I cover all of that, remove this so it's not taken from there, and type "knight in medieval". I think I've spelled that wrong; it's okay, it knows.

So here are those images of me in the street, just like we had before. Let's download that, and the alternative, of course, was to come into the face swap, which is the much quicker way to do it, though it might be a bit soft. Take the image we just got from Midjourney, this one, and let's do a face swap, which I showed you earlier in the image section. Let's swap and have a look to see if this way renders results as good as using a LoRA. I don't think it will; LoRAs are super accurate, and you can use them across multiple tools. It's just a little bit more of a technical way to do it. Let's have a look. Yes, that's definitely me. It is me, slightly less soft, but the face mix here has a slightly different face shape; it hasn't taken my exact shape. We've talked about needing the same face shape, and it passes if you're trying to have character consistency, but it's not as good as using the LoRA.

Now let's also come back to... okay, here we go, "medieval knight in armor". Here's me again, now wearing full armour; I was able to do that with the editing. Let's download that, and in exactly the same way I'm going to swap this with me in armour and see what the result looks like. And here I am. So there's me. That's pretty good, and here is the Flux version right there. That's me, and that's me; they're both definitely me. Perhaps the lighting is slightly more consistent with the shot in the Flux one than it is here, but it works pretty well doing it this way.

So they both work, and I'm a big fan of Midjourney and doing it that way, but a lot of people, especially if you're getting really in-depth with AI, are fans of using LoRAs, and they can be carried across to different platforms. So this is how you use one, and this is how you create one. I wanted to show you that because some people were asking: "Hey, I've heard a lot about LoRAs. What are they?" Well, this is what a LoRA is. There are other things we can get to later on, and another update I'll do where we could take an image, a scene you've created in another platform, and add it into here. So I'll see you once again really soon with an update. As I keep updating this course, let me know if there's something in particular you want to know; as I keep exploring and finding things, I'll add them, and as the tools update, I'll update you. See you in the next lecture really, really soon.
— AI Video Creation : Introduction & My Top 3 Tools —
So, you're finally here: we're onto the video generation section, which lots of you have probably been looking forward to. Some of you will have skipped straight to this, and you'll be missing some integral earlier parts of how we make this happen. But this is a very exciting part.

I'm going to share with you now a page that I've created especially for this section; you have access to it by being on the course. This page right here: AIvideo.school, AI Video Tools. If you scroll down, there's a little bit about it right here: every single tool we're going to look at, from Runway through Luma, Haiper, Pika, InVideo, Stable Diffusion, Kling, Kaiber, Flux and RenderNet. So everything you need: if you need to come to a tool and follow along step by step on how to use it and some of its features, it's all here in text form, if that's how you prefer to learn.

Now, there are a lot of AI video tools out there, and more are being released every single week. I've handpicked for you the main ones that I use, my backup ones, and then some extras. So there's way more on here than you could ever want, and I'll keep adding to it. For example, here is my hierarchy: the main tools I use, some backups, and some extras.

I get this question a lot. A lot of people ask me: what main tools should I be using? Obviously there are loads of lectures in here about loads of tools, and you probably don't need all of them. You might need one, two, at most three, depending on what you're doing. So here are the main tools that I use for video, and in this section I'm going to go through these three in depth. For the other ones, perhaps there are a couple of lectures, or just one, on each. I think that 99% of you could do everything you want with these three, and there are differences between them in what they can do and in price, as you'll see.

My main one is Runway. I make the course project with it, even though I don't think its video is quite as realistic as Veo's (although it's pretty good), and Veo has lip sync involved as well as included audio and everything like that. But Veo is a lot more expensive than Runway. And Midjourney is even cheaper: there are $10-a-month packages right now, and it can do video. We've already seen its images, and in the next lecture I'm actually going to show you its video. So Midjourney is the cheapest, but you have less control. For control and realism they rank one, two, three, and for price they likewise go from cheapest up to most expensive. I'll go into each of them, and you can see exactly what they offer.

Now, I use some of these other tools as backups. For example, Hedra I sometimes use for lip syncing. If I'm doing an avatar especially, I'll use pika.art; I like their frames feature, where they merge frames together to make something like a montage scene. You'll see me use some of these, but pretty much you could do everything with the three main tools. These are the backups I go to if I'm ever having trouble with one of the main three, though there's nothing the main three can't do, with the possible exception of lip sync, where Pika and Hedra come in. And then as extras, if you want to do something completely different, there's InVideo, Kaiber, Kling, RenderNet, Flux, Accord, Stable Diffusion; I show you a lecture on each of these.

But the main question I get is: "I can't go through all these lectures, and I can't have multiple subscriptions. What are the main ones?" Watch the lectures on Runway, Veo 3 and Midjourney, and they'll cover pretty much everything you'll want to do, I expect. So in the next lecture, let's get into exactly that: let me show you Midjourney first. Then there's one lecture on InVideo, then three or four, I think, for Runway, explaining how to use it, and then Veo 3 has five or six, going through exactly how to get the tool, what's in it, how to use it, its different features, and things like that.

So head on over into this section and check out these tools. Obviously, I'll update some of them as needed, they may go un-updated if they're not worthy of it, or I may remove some; I can't cover every single tool that's out there. And lots of tools you'll find are using a white label of Veo 3: I think Canva and some others might be, and Filmora does. So sometimes you're in another tool, like Filmora, and it's actually utilising Veo 3 for a lot of its features. I'm showing you these tools natively, on their main sites, so you're able to produce videos on them directly. So let's get into it next: let's check out some of these main tools, and then I'll show you some more later.
— Midjourney Video —
1
So mid-journey, creating video in mid-journey,
2
very exciting.
3
I think mid-journey is going to do
4
some great things with video.
5
It’s going to become an all-in-one
6
platform for image and video.
7
It’s so good at creating imagery, why would
8
it not do that?
9
Now, right now, at the time of recording
10
this, it’s relatively basic, but it does adhere
11
to…
12
I find that text doesn’t morph and things
13
like that when I’m using mid-journey.
14
So here’s some of the images that we
15
created previously, or here’s those images of me
16
in the last section with mid-journey.
17
Now, to create video, if you’re not going
18
to…
19
Here, let me go this.
20
I can easily put this, here’s a start
21
frame and end frame for this if I
22
wanted to, like maybe I start here and
23
end here, and I can describe how I
24
want it to transition.
25
But most of you are probably going to
26
be creating an image like this, this woman
27
right here.
28
And when you click on there, I’ve got
29
some options right here, and I’ll just do
30
these and we can watch them.
31
So there’s auto, which is pretty much automatically
32
animating.
33
So you don’t have any prompting prowess, any
34
dictation as to how it’s going to be.
35
For example, if I wanted a bird to
36
land on her shoulder, it’s not going to
37
do that, like if you would prompt for
38
it.
39
So I can go low motion and high
40
motion.
41
Let’s compare that.
42
And then we go loop, low motion, high
43
motion, and we can check that.
44
Now, if you remember back in our settings
45
from the last section, when I’ve got video
46
batch size, it’s making two each time.
47
So I should get two generations for each
48
one of these.
49
So auto, low motion, high motion, slow motion,
50
high motion in looping.
51
Let’s take a little look at that and
52
we can compare some of these.
53
Okay, these are finished generating now, and you
54
can see right here what it is.
55
So if I scroll up, I can see
56
that this one was a looping, is low
57
motion, high motion.
58
You see that loop right there, loop video,
59
video, motion low, motion high, motion low, and
60
such like.
61
Now, motion refers to both the character and
62
the camera movement and background movement.
63
So if I just hover over, if I
64
got my cursor here on this bar, it
65
starts moving and you can have a look
66
at these or click and you can see
67
them larger.
68
So on motion low, let’s compare these.
69
We can see that the motion is almost
70
slow motion as she moves.
71
Head turns slowly, very good.
72
Or this one, she keeps looking straight at
73
camera and then a turn, good realistic bounce
74
of the hair there.
75
It’s really nice, really nice.
76
It’s a really good image.
77
Okay, so let’s take a little look.
78
This one is high motion.
79
So let’s compare that.
80
You can see that she’s walking more and
81
there’s definitely more movement.
82
The camera almost feels handheld.
83
It’s like a music video, isn’t it?
84
Like it’s definitely following her really well, really
85
good.
86
Let’s take a look at this high motion.
87
So she doesn’t stay in spot.
88
It’s not slow motion.
89
It’s kind of, she’s crossing the street and
90
there’s people walking in the background.
91
Nice, really good.
92
Now compare those to looping.
93
Now, looping you might want to use for,
94
I don’t know, social medias or something.
95
It means the start and the end frame
96
are the same so it can continuously loop.
97
Let’s have a look at these on slow
98
motion right here, on low motion, sorry.
99
So she turns, looks back.
100
I never think these are really realistic, but
101
maybe that’s not the point of them.
102
They are to be used for shorts or
103
something.
104
You might be using these on TikTok and
105
things.
106
So this one is low motion and here’s
107
high motion.
108
So she turns more all the way around
109
and then looks back and then it starts
110
again.
111
And the last one right here, she turns
112
more, not that realistic.
113
I would never use looping unless you really,
114
really need it, obviously.
115
So the other thing to do was, let’s
116
go back and use the same image to
117
compare these.
118
Animate manually.
119
So now this woman’s in New York City.
120
I can say a woman walks forwards.
121
Let’s run that.
122
Let’s do something completely obscure here.
123
Let’s go woman smiles and walks.
124
These are stuff you might do.
125
Let’s do something weird and see if Mid
126
Journey can deal with this because it’s not
127
as responsive, I think, as something like Runway
128
VO3.
129
So let’s go a bird lands on the
130
woman’s shoulder.
131
Really out there.
132
Okay, let’s hit and run and have a
133
look at these.
134
Okay, well, these are generating.
135
Oh, this is good that I did this
136
here.
137
School boy error right there.
138
So woman smiles and walks.
139
This is good, this happens.
140
I can see because people go, why the
141
heck did that do something so random?
142
See, it wasn’t our woman right here.
143
When I said animate manually, when I was
144
saying, hey, the woman smiles, she wasn’t in
145
here.
146
The start frame wasn’t selected.
147
So let’s go back and say woman smiles
148
and walks.
149
Okay, hit.
150
And let’s do the same with our funky
151
response.
152
A bird lands on this woman’s shoulder.
153
And this is my starting frame like this.
154
Let’s run that.
155
Okay, so here’s our first manual.
156
A woman walks forwards.
157
Nice.
158
Now forwards is presented as forwards towards camera
159
and the camera moves.
160
But these are really nice shots, aren’t they?
161
Look, she walks in, she comes close to
162
camera and looks in.
163
Nice.
164
She’s definitely walking forward and the camera comes
165
to meet her.
166
Really nice, actually.
167
Mid Journey does a really good job at
168
using this.
169
So it’s definitely an option for you.
170
Here’s a woman smiles and walks.
171
Definitely did it wrong because I didn’t have
172
the reference.
173
Here’s a woman on the shoulder.
174
Okay, so a woman smiles and walks.
175
Let’s wait for these to generate.
176
Okay, here she is smiling.
177
Perfect.
178
Definitely responded.
179
She walks forwards and she smiles.
180
Let’s have a look at the next one
181
there.
182
She walks forward.
183
Does she smile?
184
Yeah.
185
Perfect.
186
Look how good Mid Journey is doing at
187
this.
188
I really like this.
189
You could actually have Mid Journey as your
190
sole tool depending on what you want the
191
video for and quality and how much control
192
you need.
193
If you don’t need that much control of
194
things then it might actually be the only
195
tool that you need here.
196
It depends and the budget is very friendly
197
for Mid Journey.
198
Now, the very obscure one. Look at this bird landing, so it even responded to that. It's just on 68%; it's going to generate in a moment. I was really putting Midjourney to the test, because when they first launched video it just had auto, low motion and high motion, and now you can prompt the motion manually. I wanted to see, because even in Runway, sometimes when you prompt like this you don't get the result that you want. Midjourney is responding to this. I want to see this bird land on her shoulder. Okay. Yeah.

Now, I didn't describe the bird. I didn't say "a small white bird", so it's just done what it wanted here. Let's have a look. A bird definitely landed on her shoulder. And the next one: yeah, a bird lands. More than one bird comes in. Wow. It even responded to that. Midjourney is very impressive.
Now, I think it's really going to push forward on video. It's going to be an all-in-one tool, which is what Runway has become. I've actually updated the Runway lectures: it used to be that you dragged in an image and then animated it, but now you create the image and turn that to video. All the top models are really becoming like this. So you can actually get great consistency inside Midjourney, creating your image with the same person using Omni Reference, for example, which we spoke about in the image section. Now you can turn these to video, and you can even tell it some quite obscure things that you want it to do, like a bird landing on the shoulder. That bird wasn't even in the original image; I'm adding extra imagery. And because Midjourney is good at making imagery, the videos come out really well.

I'm very impressed by Midjourney video, and the price is great: I'm on the basic plan here, $10 a month, and I've got images, the editing and video inside this. It's a really popular and good tool. Midjourney never fails; it's definitely a staple in my arsenal of AI tools. OK, I'll see you in another lecture. Let's get on and look at some more video tools.
— Runway ML – Introduction and Access —
Now, let's talk about Runway, a really amazing and great AI video tool. You will have seen me use this, and if you're following the course project, you'll see me use Runway for lots of the video, as well as later showing you Veo and Kling and so on. In the course projects you'll see an older version; this is the latest version, Runway 4.5, and there are other tools available in here. So you'll see me in the projects perhaps use an older version, but I'm going to teach you the newest version now. The principles are the same; just the layout and options are slightly different. So this is the latest version of Runway.

To get access to Runway, head to app.runwayml.com, or you can follow the links in the course pages and click straight through to the dashboard right here. Now, there are lots of other things on here: you can see previous sessions, apps, anything you've shared, and so on. But don't be confused by it; let's just go straight over into here. These are the main things you'll want to use, and I'll be showing you generating images, video and Act-Two. We can just click on any one of these and you'll come to a page something like this. Obviously this promo video on the right-hand side may change, but you'll come to a page that looks like this. Now, I'm going to go over the layout properly next time.
But from here you can select anything: whether you want to create an image, video or audio, and which version and tool to be on. I'll show you all of that later. Now, for access to this: if you don't have an account for Runway, you just click sign up, and you can sign in with your Gmail or any other email you want. And then, of course, to use it you'll want to have a plan. Depending on when you access this, there may be a free trial; it depends on whether the offer is available and on your location. Now, roughly speaking, you'll get credits based on the plan you're on, either Pro or Unlimited. I'll show you those shortly.
You can come over to the Runway Help Center, which is really good and has material all about prompting and things, which I'll show you later. Just go to help.runwayml.com. Here's how credits work. It says that on the different plans, for example the Pro plan and Unlimited: Unlimited is unlimited, but it also comes with credits, and there is this number of credits available. What's a credit worth? Well, it depends on what you're using it for. If you're using, for example, text-to-video or image-to-video, then you're charged 12 credits per second. So depending on how long your videos are, how much you're creating and in what tool, there are different prices: obviously a little bit more for video than for image, and audio generation, I think, is the cheapest based on this, along with upscaling to 4K and things. So how much do I get for my credits? If, for example, you've got 2,250 credits at 12 credits per second, a 10-second video costs 120 credits, so ten of those would be 1,200 credits; that works out at somewhere around eighteen to twenty 10-second clips.
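That credit arithmetic is easy to sanity-check in a few lines. A minimal sketch, assuming the 12-credits-per-second video rate quoted above (rates differ per tool and may change):

```python
# Rough Runway credit math from the lecture.
# Assumption: video generation costs 12 credits per second
# (the rate quoted above; other tools are priced differently).
CREDITS_PER_SECOND = 12

def clip_cost(seconds: int) -> int:
    """Credits consumed by one video clip of the given length."""
    return seconds * CREDITS_PER_SECOND

def clips_per_plan(plan_credits: int, seconds: int) -> int:
    """How many full clips of that length a plan's credit balance covers."""
    return plan_credits // clip_cost(seconds)

print(clip_cost(10))             # 120 credits per 10-second clip
print(clips_per_plan(2250, 10))  # 18 full 10-second clips on 2,250 credits
```

So a 2,250-credit balance covers eighteen full 10-second clips, or roughly twice as many 5-second ones.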
But just as an example, if you are creating a lot on here, then you'll probably want to be on the Unlimited plan. Let me show you the plans. Now I'm over here at the checkout, and obviously there are yearly and monthly plans. Monthly on Pro is 2,500 credits for $35 a month, and you get access to all of the models and tools and everything you could want. There is also a $15-a-month plan with slightly more limited access and fewer credits, or Unlimited, if you want it, for 95 bucks a month: then you have access to everything, with unlimited image and video generations. The credits are still right here, but you have unlimited access to image and video. So depending on how much you're going to use it and what your plans are, here are some options. Obviously these prices may change by the time you're watching this, and they're shown in your local currency. The difference between yearly and monthly is obviously a slight discount if you're subscribing for a yearly plan. So that was access and pricing.

It's a really great tool, because not only have you got Runway itself, which is the top model right here if I'm on video, I've also got access to use Kling, which I talk about in later lectures using it natively inside Kling, but you can use Kling inside here. And you can use Veo: again, I've got lectures using Veo inside its native platform, Flow, but you've got access to Veo in here, plus OpenAI's Sora and also WAN. So there are options to use other models inside here as well as Runway's own models, and I'll show you the differences between these, and also, for image, what's needed if you're using different models inside here for image and video. Let's get into it. Let me just explain the layout so you're not lost in the next lecture, and then let's get on creating.
— Runway ML 4.5 – Layout Explained —
Now, the layout of Runway is fairly simple, but just so you're not lost: I'm on this page here, and I'm just going to come down to the bottom right here. See, if I hover over here it says dashboard. I can click dashboard and I come over to the dashboard, which is basically the entrance to the site, where there are all these different options down here. You don't really need to know any of these, or at least I don't really use them.

You can look at Explore and see what other people are doing and creating, and you can see different models on here. But if I just head back to the dashboard: this right here is a new session. If you've got a previous session you were working on and you want to access it again, just click Sessions over here and you can go back to that.
But I can click any one of these because I have access to it. If I go to Image, then I've got Image, Video and Audio at the top, and I can switch between image, video and audio all inside the top here. So if I want to create my image and turn my image to video, I can do that here. If I want to just create a video and go text-to-video, I can create that here. I can use images I create in here, turn them to video, add references, etc. We'll do all of this in the next lecture; I just wanted to make sure you understood the layout. Also, if you want to make any audio, that's at the top right here.
Now, there is also something called Act-Two, which I get to right here. If I open up Act-Two, you can basically take an image of whatever it is, perhaps an image of this woman or anyone you want, and you can use your own image of yourself. You can just open up your webcam, for example, and give a gesture, a movement, a smile, whatever it is you want your character to do, and Runway will make the character move in the same way that you move. A really great tool.
Now, if I'm back in, for example, Image right here, this is where I'll be doing my prompting, and we'll get to prompting in a moment. Each time, this is where I can prompt and upload my images, and then down at the bottom here it's the same. Whether I go to Video or to Image, the bottom right here is where I choose my settings.

For example: how many outputs I want per image; what size the image is, 16:9 or perhaps 9:16 for a Reel or something; how big it is, 1K, 2K or 4K; and also which model I'm going to use. Am I using Nano Banana? Am I using Gen-4, which is Runway's own? ChatGPT or Seedream? And then I can generate down here. The same with video: you have the option to upload a start frame, or here's where I prompt. I can just do text-to-video, or I can prompt for the image and turn it into video, and again tell it the orientation, 16:9 or perhaps 9:16, whichever you want, and also how long the clips are going to be, 5 seconds or 10 seconds, and any presets.
Now, I don't use these, because I create an image and prompt, but there are presets. For example, you can tell it to pan right, pan left, push in or pull back for camera movements, and it gives you a little example of what these are like: a whip pan right there, pull back, arc. So you can use these presets, but I tend not to; I tend to prompt for them. The option is available there for you, though. Now, if you do need to access anything else quickly, here's where you can go to your references, upscale, animate frames, Act-Two. We get to all of these shortly. So this is the layout of everything. Let's get into this step by step. First, I'm going to create an image, which I'm going to turn to video. So let's go through the image options in Runway and create these.
— Runway ML 4.5 – Images (text-to-image, references and tools) —
Now, first, I'm still here in Image. First, let's create an image inside Runway which we can turn into video. So it's all in one place: get your image, turn it to video, create your image, create multiple images. Now, there are different ways to think about this. I can either just do a text prompt and turn it to an image, or I can upload an image and use that as a reference, or use anything you've created previously, which is down at the bottom, and use that image as a reference for another image. Or I could upload, for example, an object in here and use that object as a reference that I want inside my image. So we'll go through these step by step, and also the different model options: Nano Banana, or you could use Runway Gen-4, or ChatGPT, or Seedream. Those are the different ways. Let's first do text-to-image, and then we'll do image-to-image. I'm going to create four images that are often a little bit difficult for AI models to deal with, so let's test Runway and see how it copes.

First, we need to understand prompting. Now, Runway does have a really great page about text-to-image, an ultimate guide, again in the Runway help. You can just search for it right here in resources: how to make an AI image with a text prompt. It goes over some basics, understanding your prompt, building from noise, checking against your description. And it actually has some steps down here, like the four parts of a prompt: the subject, that's who they are, a person, animal, etc.; where they are and what's happening; the style, how it should look; and then any technical details like camera specifications, rendering, 4K resolution, all that stuff. Those are the four points right here. Now, we've gone over prompting before, and you can obviously make your own prompt, but you don't have to.
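That four-part structure (subject; setting and action; style; technical details) is easy to turn into a reusable template. A minimal sketch, with helper and field names of my own choosing rather than anything official from Runway:

```python
# Assemble a text-to-image prompt from the four parts in Runway's guide:
# subject, setting/action, style, technical details.
# The function and field names are illustrative, not a Runway API.
def build_image_prompt(subject: str, setting_action: str,
                       style: str, technical: str) -> str:
    parts = [subject, setting_action, style, technical]
    # Normalise each part, then join into sentence-like clauses.
    return ". ".join(p.strip().rstrip(".") for p in parts) + "."

prompt = build_image_prompt(
    subject="A woman with long brown hair wearing a black winter jacket",
    setting_action="walking away from camera down a rainy Times Square sidewalk with puddles",
    style="cinematic street photography, wide shot, full body visible",
    technical="ultra realistic, high definition, 4K",
)
print(prompt)
```

Filling the four slots deliberately, instead of free-writing, is what keeps prompts consistent across the four test images that follow.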
And the way to work on this, really the quickest way, is to use any kind of AI assistant, for example ChatGPT, and ask it to create a prompt for you. So I'm going to have four images here, four different styles I'm going to work on. I'm going to create a woman walking from behind through New York City's Times Square, and I'm going to have it raining with puddles on the floor, because I want to make sure it can handle water and reflections, as well as things like hair and movement later for video. Then I'm going to have a man turning in Central Park; that's going to be a difficult video later, so I want to have an image ready for it. I'm also going to do a Pixar-style animated dog playing with a ball, because again, the image might be fine, but the video later may be a challenge. So let's challenge it there. And also a car drifting, because car movement can be something that's tricky with AI.

So let's play with these in ChatGPT. I'm pretty much just going to dictate the prompt here: "I need a text prompt for Runway creating an image from text, text to image, period. I would like an image of a woman walking through Times Square, New York, down the sidewalk, period. It is daytime, but it is raining and there are puddles on the floor, period. She has long brown hair and is wearing a black winter jacket, period. This is a wide shot, see full body, period. High definition, 4K, ultra realistic." That's my request, and I've told it the prompt is for Runway, so it knows how to make a good prompt for Runway. Let me just run that. And here it's created my prompt for me: ultra-realistic wide shot of a woman, wet pavement, cinematic street style. Great. Make sure you do read it through and check there are no mistakes or anything inside.

If I head back over to Runway now, I can paste this in. Now, I want to show you something, because right now I'm selected on Nano Banana Pro, which is from Google. If I go to Runway's own AI image tool and try to hit generate, it says a reference image is required. It wants me to upload a reference image. Now, there is a little bit of a hack here. If I just come up here, you see this draw symbol. I can click that and it says, OK, can you sketch something? If I choose this, I could sketch something as a reference. You could try to draw out a woman walking from behind, but actually, if I just draw anything right here and export the sketch, we'll see that it's already here in my recents and loaded up here, and now I can generate an image. So let's generate that and see what Runway does. Now, once again, just to remind you, I've got four outputs selected and my settings are on 16:9, so I'm getting four images for this. And you can see, even though my sketch was just a line, it has generated images exactly as I described. So if you don't have a reference image and don't want one, you can just do that to get around it.

So if I look at these images, I can flip through them and see how they are. They look good. The pavement's definitely wet; there aren't really puddles, but it's definitely been raining. OK. And there are slightly different color variations: slightly over to the right there, center, slightly brighter. But it looks really nice.
It looks like a very realistic image. Obviously, the cars are all going the same way; they're kind of close. Those two look good. The reflections of the lights over here are really nice. Now, let's just compare that. If I just remove this, I'm going to compare it to Nano Banana Pro quickly. Let's generate those. OK, and these are the images from Nano Banana. Let's take a quick look at these. Definitely more rainy. I'm not sure how realistic that bit is there. Yeah, I can see rain on her back there, her hands a little wet, she's walking. You can see the water in this puddle. Very nice. This one's really nice as she's walking; I can definitely see the water coming down, and this is ultra realistic. Really good. The cars are all going one direction. Yeah, perfect. And the same with the last one. This is really nice: see the water coming off of the boot right there, if I scroll down and take a look at that. Really nice. So let's compare these two models side by side. Here's the Nano Banana one; here is Runway. There's a little bit more depth in color, I think, in Nano Banana's response. This looks a little bit more clinical from Runway, but both are still nice images.

Now, just while we're here, let's do ChatGPT. Let's run that one. And let's also, while we're here, run Seedream by ByteDance and generate that. All right, here are the generations from ChatGPT. Let's take a look at these. Nice. I can see the water, and the reflection of this sign here is really nice. This one's good. I'm not sure about the road here; it's like the pavement finishes and there's a crossing there. Yeah, maybe it is one of those ones up near Times Square. Here she's walking in the middle of the street, not the sidewalk. Let's take a look at the last ones, from Seedream. So this one has an angled shot on a lot of these rather than central, but she's definitely walking. There's a puddle there. Nice. I'm not sure how realistic this is: the splash is here, but her feet are there. OK, I think the best results come from either Nano Banana or Runway, depending on whether you want more of a clinical, clean shot, like Runway produced here, or one with more depth and color.

Let me just remove all of this. I'm going to do another three different examples, like I mentioned, for image. Let's also change this back, because we're inside the Runway course, so let's select Runway right there. I'm going to just scribble in anything right here and export the sketch. I'm going to ask ChatGPT for another three different prompts that I want to use here. So the first one: "a black male aged 40 wearing a T-shirt and jeans in the middle of Central Park on the grass, period. It is a wide shot. He fills the screen, period. He is smiling, happy, with his arms out wide, period. Ultra realistic, 4K." I want this one because I want to have him turning eventually; I want to see how the model does with turning, because that can often be an issue. So I want his arms out, and I want him to turn around when I turn this to video later. So let's run that and get a prompt for this. Whilst I'm here, I'm going to just prompt for the next one: "a Pixar-style animation of a dog playing inside the living room of a house with a red ball, period. The dog is small, brown and fluffy." Let's run that one. And: "a car in the middle of the street in downtown Tokyo at night, period. The car is a drift car and about to drift around a corner, period." And let's run that.
And again, I want to see how it does with the image, because I know for video there could be some challenges later. So let's go up, grab our first prompt here and see how it does with this man. Generate. That's running. Let me go and grab the next prompt for the animated dog, and then for the car, and run those in. OK, let's take a look at these images right here. A man with his arms out. How realistic is this? It looks a little bit washed out, like super smooth. Let's take a look at this one. Yeah, the same: it does look a little bit smooth and AI-style, a little bit CGI. See, on the jeans here, a little bit smooth, but not bad. OK, let's take a look at the next ones. So this was the dog, Pixar style. Definitely an animated dog. I'm not sure it's exactly Pixar style, but we have got an animated dog. Let's take a look at these again. Yes, yes and yes. All right, I'm interested to see how this turns to video. And then we've got the car in downtown Tokyo. It definitely doesn't look ultra realistic; it looks a little bit AI, if that's a term. OK, great. I mean, these do look nice. They are nice images.

But let me just compare them. If I change this model to Nano Banana Pro, it's going to remove that reference here. And I've got, for example, the car prompt right here. Let's run that. And these are my images inside Nano Banana. So I still don't have ultra realistic here, but it is slightly more realistic for sure, maybe because it's nighttime. Oh, this one does look realistic with the reflections on here. And these are the other options for that. OK, so that was comparing those. Now, what can you do with these once you've created them? Well, if I just grab this image of a man, I've got options down here. I can vary it. So if I just click that, I'll show you: I can vary this shot and it starts to generate variations of it.
Let me go back up to one of these shots here. Or I could use this for video, and if I click that, it opens up right here; you see it's putting the image at the top, and then I can prompt and create a video. But that's for the next section, so I'll show you that later. Or, of course, I could use this as a reference. So if I click on here, I can use it as a reference, and it populates both down here in my recents and up here. So if I want to create another shot of this man, I can use it as a reference, which is really, really good. For example, I could say: I want this shot from behind this man, exact same location, but a view from behind him. And let's generate that. Now, if I come back down to the bottom here, here are my different variations from earlier. You see, it's pretty much kept the subject but changed the background slightly; here the angle changed slightly, slightly wider, just different variations on it. So now, using the reference image, I've got a view from behind him: exact same location, exact same man wearing exactly the same clothing, but a view from behind.

Now, why is this super important and a really good feature? Because when I'm turning these to video, for example, I could have this shot of the man smiling with his arms up, and maybe I want to cut to a different view of him. So if I want to cut to behind him and generate a video there, I could generate both videos, and then, when I'm editing, put them together side by side, and it would seem seamless. So if I had, for example, that woman walking, I could have the woman walking and then want a view of her from the front or the side. I can use the image as a reference, which is obviously crucial here. I can use it as a reference (make sure you remove the other image) and say: this woman, exact same location, but a view from the side, close up. And let's run it. And now it's changed the image. It hasn't done it really close up, but, for example, this is the woman from the side. I could re-reference that image and say close up. So now I've got options: I can create multiple images using the same people, which obviously helps with continuity, and the same location, just changing the shot. So when I'm turning these into video, I can go from one shot to the next, and it looks the same, for continuity, which is the biggest issue, I think, when creating video with AI. Now, the last thing I want to show you in here is that I can use uploaded references for this if I want to.
So I can upload an image of anything. Let's say I upload this image of a French flag on a pole, and I want someone to hold it, for example. So I can just say: "a man aged 30 wearing a blue winter jacket is outside the Eiffel Tower in Paris, and he is holding and waving this flag, period. He is happy. It is a sunny day. It is a wide shot, ultra realistic, 4K." And it doesn't matter whether I'm on Nano Banana or on Runway for this; I can generate that. So let's generate that image. And here are the images. This one, not so much. This one's pretty good, with an oversized flag. This one works. Let's have a look at that: blue jacket, the man's happy, outside the Eiffel Tower, waving the flag. Now, obviously, for this example I've used a French flag, and I could have just prompted for that; I could have prompted "a French flag" and it would know it. But where this is useful is if you are making, for example, a product video. Perhaps you own a product, or you're working for a client who has a specific product: you can upload the product as a reference right here, and then say you want whatever shot it is holding or showing that product. And you can do that each time to create images that you can turn to video with that product. So that's where it's really, really useful.

Now, that was everything I wanted to show you inside images. Just to recap: you can do plain text-to-image, and there are Runway resources; use ChatGPT or something similar to get these prompts. If you're on Runway's own model, remember you have the sketch hack if you want to use it, or use a reference image, or use one of the other models inside here for text-to-image. If you have a reference image, either one you've created inside here, or of course one you've made externally on another site, or an image of yourself or anything else you have permission for, you can drag it in and use it as a reference to change it. I can also use those images to keep consistency and make other images of the same person and location for continuity purposes. And then, of course, we've got the option to turn an image to video, or again use it as a reference and keep working on it. So that was images inside Runway. Now we've got that down, let's talk about video and start creating video with Runway.
— Runway ML 4.5 – Video (text-to-video, image-to-video, editing & lipsyncing) —
Now, let's continue right where we left off. I'm still here at the top, remember, inside Image. I can just click over here to Video. Okay, let's go into Video. Now, this is still populated from the last tutorial, so it will look like this. And here are your options. I can upload a start frame. This is the way most AI filmmakers work, because you get more control: you get the image exactly as you want it first. If you were to just text-prompt to video, then of course there's a lot more possibility for variation. Create your image first and turn that to video; that's the most popular way and it aids continuity. I do have examples later using Veo text-to-video where continuity wasn't important to the project, but for most projects it will be. So you can upload a start frame where the video is going to start, and then I can text-prompt right here in exactly the same way as if I were in Image, using references too. But this is the start frame for your video.

Now, there are options down here: the size, like we mentioned, and also the duration of your shot, and the tools down here. So I've got Gen-4.5, and there are also the older versions, which previous videos were created on. There's Kling, Veo 3, Sora and WAN. For Kling, Veo and Sora, I've got their own lectures on using those tools inside their native platforms later in the course, so let's concentrate in here on using Runway. Now, like I mentioned, you can just go straight from text to video, and there is great prompting material inside the Runway Help Center at help.runwayml.com. There is a text prompting guide with some great breakdowns of visual components, motion components and prompt structure: for example, the camera shot, of the subject, doing something, in what environment, and then supporting component descriptions, stuff like that. And there's a whole prompting section, obviously. But you can also use ChatGPT in exactly the same way we did for images, or Gemini or anything else you're using.

So first we're going to do text-to-video, then image-to-video. We've already created our images, which I'll be turning to video shortly, but first let's try prompting to create a video directly instead of an image first, and compare these side by side. So I'll prompt in exactly the same way, but for video: "Create a prompt for Runway. This is for text to video inside Runway ML, period. I want a wide shot of a female walking on the sidewalk through Times Square, period. It is a rainy day and there are puddles on the floor, period. She has long brown hair and is wearing a black winter jacket, period. It is a wide shot, see full body, period. Ultra realistic, 4K, high definition." So I can get the text prompt for that. Let's just run it. And here is the text prompt. Let's copy and paste that.
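Runway's video guide structures a prompt around motion as well as visuals: a camera move, a subject doing something, an environment, plus supporting details. A small template sketch along those lines (my own helper, not part of Runway):

```python
# Build a text-to-video prompt from the components in Runway's guide:
# camera movement/shot, subject + action, environment, supporting details.
# This helper is illustrative only, not a Runway API.
def build_video_prompt(camera: str, subject_action: str,
                       environment: str, details: str = "") -> str:
    # Drop empty components, then join into sentence-like clauses.
    parts = [p for p in (camera, subject_action, environment, details) if p]
    return ". ".join(p.strip().rstrip(".") for p in parts) + "."

prompt = build_video_prompt(
    camera="Wide tracking shot from behind, camera slowly pushing in",
    subject_action="a woman with long brown hair in a black winter jacket walks down the sidewalk",
    environment="Times Square on a rainy day, puddles reflecting the signs",
    details="ultra realistic, 4K, high definition",
)
print(prompt)
```

Note the difference from the image template: the camera movement and the subject's action are first-class components here, because a video prompt has to describe motion, not just composition.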
Let's grab it over here and paste it in. And now we're going to use Runway. Like I said, let's just do a five-second clip rather than 10 seconds for the sake of this tutorial, keep it 16:9, and generate, and we'll see how it does with a text prompt so we can compare. Remember, these were the images it created; let's compare with what it does going text straight to video. OK, here is the video. Let's play this. Nice. She's definitely walking down the sidewalk, the traffic's going the right way, and so are the other people, walking with an umbrella there, walking over the grate. You can see the rain coming down right here. Her hair actually moves as if it's slightly wet, but not really wet, and it does flow and bounce as she walks. I can even see the rain on the side of her arm here. That's really nice.
Now, it’s not a view from behind. Let me just check the original prompt we got right here.
45
Ultra realistic shot, wide shot of a woman, entire body visible. I’m not sure I didn’t
46
read through the prompt fully enough where it shows from behind. So let’s go back to
47
the prompt right here. Let me just do this again and go view from behind, shot from behind,
48
follow along from behind, period. Sometimes I just repeat that. Let’s generate. And here
49
is the shot view from behind the floor already looks really nice with these reflections.
50
Let’s play that and take a look. Yeah, she walks, even had a little bit of a pause there
51
and the camera came in as she goes across the street like she’s looking. That’s that
52
busy part in the center of Times Square. It looks like traffic moving the right direction.
53
Man crosses over. Hair is bouncing. It’s not that wet, but maybe it’s not raining too heavy.
54
Actually, I really love the camera movement coming in. That’s really, really nice. OK,
55
great. That’s really good. So that’s what we got when we did text to video. You see,
56
actually, the color is quite a lot of depth. If I go back to the image that runway created
57
for us of that woman, the colors were slightly more flat, weren’t they? Remember these?
58
So they weren’t really in a lot of depth compared to, say, the nano banana ones that
59
created. But the video it created from text was actually more like a nano banana image.
60
But let’s first do a text prompt for each of our four difficult ones when to do text to video.
61
Then I’m going to turn the images to video and see if it can happen.
So let me go back here. Where was my previous prompt? I had a black male, age 40, wearing a T-shirt and jeans in the middle of Central Park, grass, wide shot, fills the screen, he's happy, smiling. Create a text prompt for text to video inside Runway ML, period. A black male, age 40, wearing a T-shirt and jeans in the middle of Central Park on the grass, period. It is a wide shot, period. He fills the screen, period. He is smiling, happy, his arms out wide, period. The man turns around and looks back at camera, period. Man turns, period. Realistic. OK, let's run that and get the prompt. Ultra realistic cinematic wide shot photo of a black man standing on the grass in the middle of Central Park. Yes. OK, let's grab that, paste it in and generate. Now let's get our other two prompts so we can run these simultaneously. Please create me a text-to-video prompt for Runway ML, period. A Pixar-style animation of a dog playing inside the living room of a house with a red ball, period. The dog is small, brown and fluffy, period. The dog plays with the ball. Run that one. Create a prompt for text to video inside Runway ML for a video of a car in the middle of the street in downtown Tokyo at night. The car is a drift car and drifts around the corner, period. Realistic. And run that. OK, let's grab the other prompts: the dog one first, paste it in here and generate, then the drifting car, and generate. OK, let's check these out.
Now, remember, this is text to video, not image to video. Let's take a look. The main test here was that I wanted the man to turn, which is a difficult thing to make look realistic. So, does the man turn and look back at camera? Yeah, he definitely does. It's slightly slow motion. I'm not sure the arm movement coming down was 100 percent realistic, but it's pretty good. And he definitely turns and looks back at camera, exactly as I prompted for. So that's the text to video; we can compare it shortly with image to video. OK, here is the dog one. Let's take a look. The dog is definitely playing with the ball. The ending is nice: it lifts the ball up halfway through, and that final moment where it looks up and rests its chin on the ball is a really nice image. That is really good. And it's definitely Pixar style, maybe older Pixar, like the original Toy Story, but it got the style, and the ball movement is realistic: it bounces, moves and spins, and you can see the light on it spinning. That's super good. And the fur on the dog, even though it's animated and we wanted that Pixar style, looks really, really nice. OK, I like that. Now, here is a car drifting, which is a difficult thing to do. Let's see how Runway coped with it. All right. That first part I'm not too sure about, but this bit is really nice, and then the camera stops and pushes forward. That looks really good. Is it ultra realistic? It's nighttime, so you can get away with it looking slightly CGI. You might not believe it's real, but that's really good. The movement coming around that corner is super good. I really like that. Well done, Runway.
That was really good. So those were all of the text-to-video generations. Let's do a direct comparison with image to video. Let's do them in the right order and go back to our generated images of the woman walking through Times Square. I like this one. Let's use it for video: just click Use at the bottom here, take away your prompt, and I'm going to say: woman walks forward and the camera follows from behind. Run that one. Let's go to our guy in the park. Use it: man turns around and looks back at camera. I'm keeping these prompts really simple for the sake of this tutorial. Let's choose this one: dog plays with the ball. Generate. And then the car drifting. Use: car drifting through the street.
The car drifts. Generate. All right, let's see how these do in comparison to text to video. Now this one's finished. Let's look at the image-to-video result and see how the movement is. The woman walks. Yeah, it's realistic; there's a bounce to her hair. That's always what I check: the bounce of the hair, and a separate little bit of hair flows out here. Is the walking realistic? Maybe not that very first bit, but after a moment there's a bit of a stride in her step. Also, she's in heels, so the movement is quite realistic for that. The car traffic is going the right way. There was a guy walking in the middle of the street here; perhaps my image had that. Let me go and check. I really want to see if that's true. Let's go back up to our images.
And I chose this one here. Oh yeah, there is a guy. So it kept him consistent, which is the main reason to use image to video, I think: consistency. So overall, a pretty good generation. The traffic's going the right way, which is often the main thing, and people are waiting to cross the street. OK, nice. Let's check out our other video. In this one I wanted to make a man turn, which is often a difficult thing. He turns and looks back. Yeah, nice. I didn't prompt for him to turn with his arms still out wide, but he definitely turns, moves slightly and looks back at camera. OK, it did it. I'm not sure about the overall quality in terms of realism. I mean, it's definitely realistic; we're just getting so picky with AI now that we ask, is it ultra, ultra realistic? But the shadows, and everything on his shirt as the creases move.
That's another very difficult thing, and it looks really good. Nice. OK, a dog playing with a ball. Let's play this. Yeah, it definitely moves. I like the turning of the ball and the light on it. The dog plays. Great. And then what happens to his paw? Oh yeah, the back paw is still there. Perfect. There isn't any morphing, which is what we used to get with AI tools; maybe a year ago you'd suddenly have a missing leg or some morphing. Is this as good as the text to video? I'm not sure. The text-to-video version has some really nice texture and a bit more depth, I think. But the main reason to use image to video is consistency. If you want this dog in multiple shots, then obviously use the image for that.
Now, the car drifting. Let's take a look. All right, it definitely drifts. I'm not sure about the opening, and the image is slightly more CGI than ultra realistic, but from there the movement works really nicely. I don't think it's as convincing as the text-to-video one, though. I'm a huge fan of text to video. For some reason, across a lot of tools, text to video gives the AI model a little bit of leeway to create something, and what it creates is often a little more realistic. I see this in other tools as well. So image to video is great because you get consistency, but I quite often find that text to video gives a more realistic output; you just might not have consistency between characters or things like that.
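The prompting recipe used throughout this lecture, short declarative sentences (subject, setting, shot type, style tags) each ended with "period", can be sketched as a tiny helper. This is purely illustrative: `build_prompt` and its parameters are my own invention for this sketch, not a Runway feature.

```python
# Hypothetical helper sketching the prompt recipe from this lecture:
# short declarative sentences (subject, setting, shot, style), each
# ending in a period, joined into one text-to-video prompt.
def build_prompt(subject, setting, shot, style="Ultra realistic, 4K, high definition"):
    """Join the pieces into the 'sentence. sentence.' prompt style used above."""
    parts = [subject, setting, shot, style]
    return " ".join(p.rstrip(".") + "." for p in parts)

prompt = build_prompt(
    "A wide shot of a woman walking on the sidewalk through Times Square",
    "It is a rainy day and there are puddles on the floor",
    "It is a wide shot, full body visible, shot from behind",
)
print(prompt)
```

The point of the recipe is that each sentence carries exactly one piece of information (who, where, framing, style), which seems to make the generated prompt easier for the model to follow.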
Now, the other things you can do here: once I've generated a video, I can upscale it to 4K with Topaz AI. Let's click that, and it's generating. You'll see me talk later about Topaz AI as a separate tool; it's now built into Runway. Really good. I can also download the result straight away as an MP4 or as a GIF, share it, like it, or, if I click Other Apps, do things like edit the video or use this character in Act 2. We get to Act 2 later, but if you want to use a character inside Act 2, remember you can do it from here. I can also lip sync this, although I don't suggest using lip sync inside Runway. But let me just go through this.
Let's go for Edit Video. Now I can reshoot with words or image references. So if I like that video and I want to re-edit it, change something, have a ball or a kite go through the background, or something else, I can. For example, let's test this: change the man's T-shirt to white, and generate. Meanwhile, this is the upscaled version using Topaz, which looks really nice. Let's take a little look. Yeah, you can suddenly see those lights come out, so it does help make it more realistic for sure. Really nice. Perfect. OK, let's see if the re-editing worked. And this is the re-edit right here: change the man's shirt to white. Let's play it through. It works really well. So if you ever want to edit, under Apps I can go to Edit and then text prompt for any editing that you want. Now, if we like this, I can also lip sync, so let's use lip sync on this. It's got the man right here, so I can record myself, or you can upload your own audio file if you want. Let's record audio: "Thanks for being here." I'm keeping it very simple because he turns after this in the shot. Let's have a little look at that.
You can play it back and then say: use that audio. Generate lip sync. Now, whilst I'm waiting for that to load, let me show you something else inside Apps: I can use the current frame in an image. So, for example, if I go to the point where he turns, I can go to Apps and choose use current frame in image, or in video; let's go with image. And it's populated right there under my images. So now, if I want to, I can use that image, and I can change it slightly if I want. I could say: view this man from the front, and then generate a video from that. So when I've got the man turning here and I come to edit, he can turn, then I can cut to the view from the front, generate a video for that, and the two shots can sync seamlessly inside my edit. It's often useful if you've got a video and you're going to cut somewhere: I can change the shot by using a frame, going straight to video, or, using images, saying I want this man viewed from the front instead, generating a video from that, and putting the two side by side in my edit so it's seamless. So now we're getting to be real video creators, starting to think about our edit and how one shot leads to the next for consistency. Runway is a great tool for that. Oh, I got a flag here: the light changes too much in this content. So if you have a clip where the lighting changes too much, Runway doesn't like it for lip syncing, because it's trying to make an accurate lip sync for your video. Let me try another one. OK, so to do the lip sync and try again, I've generated an image of a man at a desk and turned it into video: nothing with too much drastic movement or lighting changes. I'm going back to the app.
Let's go to Lip Sync, and in exactly the same way, all I'm going to do is record a very basic bit of audio for it to lip sync. So let's record: "Hello and welcome to the course. I'm glad to have you here." OK, finished. Let's use that, generate the lip sync, wait for it and see how it does. OK, let's play it and have a listen. Let's turn this up. "Hello and welcome to the course. I'm glad to have you here." So that was the lip syncing. I don't think Runway does the best version of lip syncing, and I wouldn't use it. Do remember that if you are generating a video inside another tool, you can often text prompt to say a line in a particular accent, and you could then use something like ElevenLabs if you want to clean things up. I talk about that later in the course. The lip syncing I don't think is great, but I did want to show you it.
So, to recap video: we've done text to video, and I compared it to image to video, which I think you only want to use for consistency; I don't think the generations are any better. Then we can edit our video, upscale our video, and lip sync if you wish. So that's everything inside there: loads of tools and features. And of course, if you're making this for consistency, you can use the image as a reference, or a frame of your video as a reference, to turn into an additional video, thinking about how you'll put these shots together when you're editing. Great. OK, whilst we're here, I'm going to show you audio in the next lecture, which is very, very quick, and then Act 2.
— Runway ML 4.5 – Audio (for lipsyncing) —
Now, this won't be a long lecture, because it's a very simple tool. Up here you've also got Audio if I click right here, with all the different voices, which I can play and listen to. So if I put text in here, for example: "Hello and welcome to the course. It's really great to have you here. Let's get on now and create some amazing AI video with Runway ML." With my text in, I can choose whichever voice I want. Maya is great; let's generate. OK, and it's generated already. That's super quick, because it's just audio. Let's hit play. "Hello and welcome to the course. It's really great to have you here. Let's get on now and create some amazing AI video with Runway ML." Nice. It's very realistic, very relatable; I really like it. Now, of course, you can download this. So if I download the audio, I can then use it: if I had this shot right here and I wanted to lip sync it, I could upload my audio file, add the one we've just created, and hit generate lip sync, and it would generate the lip sync for the audio. So if you didn't want to speak and use your own voice, you can do it this way. A really simple tutorial, sorry it's so short, but I wanted to show you this because a lot of people use external tools like ElevenLabs for this, and you can generate text-to-speech inside Runway ML. OK, let's get on and look at Act 2, which I think is a really great feature that you may want to be using.
— Runway ML – Act 2 —
Now, the last thing I want to show you is Act 2, which is a really great tool inside Runway. I think you're going to really like this: using your own movement to influence what a character does. There are multiple ways to get there. You can go back to the Dashboard, where we first saw our options, and Act 2 is there. Or, if you already have a video like this, I can go to Apps and choose use this performance in Act 2 or use as a character in Act 2. So I want to use this character, this person. Let's select that inside Act 2. What it's doing now is detecting the face of the person right here. Great. And what this top option does is allow you to use yourself no matter what location you're in. So if you're just in your office, just on your laptop, that's fine. You can do that. So let me show you: let's hit record.
You can also, of course, upload a movement if you have another video whose movement you want to apply. I'm going to hit record right here; it pops up and there's me. OK, it says go from the waist up. You don't have to, especially for this shot. So let me just use myself here. Let me hit record. It counts down, and then... OK, stop. I'm going to use that, and now it's going to apply my movements here. Obviously a similar shot is preferable. But if you're creating video and you can't quite get your character to turn their head, to look a certain way, to laugh or smirk in a certain way, you can actually tell it what to do through your own actions. So let's use this and generate. It's now generating down here: it's going to copy the movements of me just sat here at my laptop and apply them to the video or image that you have. You can do this with just an image too: upload an image right there. I can also choose a voice, as if you're talking, change up the expressiveness if you want, and adjust settings like aspect ratio. OK, let's remind ourselves of the action in the top left. I look forward, then I laugh, then I look one way, look the other, then I look up, laugh again, and stop. Now, the man is already looking slightly that way, so I'm not sure whether it will turn fully or not. Let's play this and have a look.
So: a pause, then he laughs, then looks one way, then looks the other way slightly. Then the look up is slightly skewed, and he laughs again. So it works pretty well, and I think with a different example, especially an animation, it would be really good, because it would work even better with an animated face. Sometimes it does skew slightly like that, but you can see it's actually moving the face right to left perfectly. Yeah, nice. I do pull a bit of a funny face there. I don't think he quite looks the way I did; I look fully to one side, and it didn't turn him fully, but it works pretty well. If you want a character to smile, smirk, laugh or cackle in a specific way, you can act it out for it. And then, of course, you could change the voice to another one. So that was Act 2, and that was Runway 4.5 and all the different options for image, video, audio and Act 2.
— VEO 3: What’s Possible with Veo 3? Real Videos Going Viral Now —
Now, what can you use Veo 3 for, and what are people using it for? I love this bit, where we look at what some people are using Veo 3 (and probably Flow) to create these scenes for, because perhaps you're unaware of how good and amazing this software is. Let me show you some things that are trending right now that people have made with Veo 3. For example, here is a video by the Dor Brothers. It's an influencer video, mocking influencers and their crazy, crazy challenges. Let me play you some of it.
"It would totally be better if we ran it. Men literally destroy everything and my girls need to stop being so soft with these basic losers. Who even needs men, right? Anything a man builds just gets destroyed by a different man."
"This collapse is literally the perfect dip. I'm buying more right now."
So you’ve got here, you’ve got here influencers that are out there after there’s been a massive
13
collapse like nuclear bomb in the world. And they’re talking all different style of influence
14
to talk about how they’re going to buy Bitcoin, like protesting here, talking about here,
15
I think about protein and things whilst the world’s burning behind them and exploding.
16
So really funny, like mocking that. But look, main thing with V03, look how realistic they
17
look. If I hadn’t told you this was V03, a lot of these I would have just said are real.
18
Even the lip sync, audio, sounds, everything. This is all inside V03, by the way, including
19
the voices and sound effects, all with a text prompt. You’re getting the video, the person,
20
the action, the voice and the sounds behind all from one text prompt in V03. It’s incredible.
21
Another video that I’ve seen go viral, really good, 1.2 million views in three weeks is
22
this Bigfoot playing the banjo. Let me play you some of this.
23
Hey, y’all, Bigfoot here. This is my new single. Hope it don’t scare y’all off.
24
Amazing. You’ve got Bigfoot playing the banjo. Now that’s, I would watch that. And I’ve actually
25
been playing with some of that and creating some bits just like it myself inside here.
26
Here’s me actually earlier playing this exact thing right here.
27
Bigfoot here. This is my brand new song and I hope y’all like it.
28
I was able to create this, which is very similar to what you just saw. Able to create this
29
with one text prompt. Very simply done. In fact, I used the image and I was able to get
30
myself a good description using AI. I just copied and pasted and got this in seconds.
31
This is really incredible. Really good. OK, what else are people doing? I've got more impossible challenges here, and street interviews with people. Look how realistic this is: it's someone who believes that artificial intelligence is God. These people are not real. Look how incredibly real the way they talk and interact with each other looks. It's a Veo 3 text-to-video prompt that made this. Super impressive.
Here's another one: impossible challenges, using Veo 3 to prove one man is enough to fight a gorilla. So you've got influencers fighting a gorilla, licking Chernobyl, a guy jumping out of a plane with no parachute: ridiculous things that influencers might do. Perfect. And then you've also got this: ASMR. ASMR videos are hugely popular, and now you can create them all with Veo 3 from a text prompt. You could have someone whispering into a mic with the audio moving from one ear to the other (you can do that in your editor), or something like this: this person has made glass fruit that they're going to cut through. I want to watch this. Look, with sound effects. Oh, that is crazy satisfying. I can imagine how these get so many views on YouTube, TikTok and other shorts. So that's a few things people are doing with Veo 3. Obviously you could also be doing something serious: a short film, a documentary, anything you want, or these viral videos. It all depends what you want to make. But the main point is they're super realistic, you get audio, including background noise and voices, and you do it all from a text prompt very, very quickly. Really impressive stuff. Let's get in and explore Veo 3.
— Veo 3 Explained: Overview & Comparison with Veo 2 —
So, we need to understand exactly what Veo 3, or Veo, from Google is. The next few videos are going to be short and sweet; they're on some of the very basics, so there are no missed questions.
Someone might ask, well, what exactly is Veo, and what's Imagen, for example? I want to make sure you have all the information, so I'll divide these up over a few lectures. You can skip them if you already know this, or watch them so we're all on the same page. So, the first question: what exactly is Veo? Veo is Google's AI model for video, and Imagen is its model for images. Think of Veo as Google's name for their product for creating video, where you can create amazing video. Actually, if you look at these examples, they're really, really good; I'm super impressed, and you've probably seen loads of videos like this on YouTube and elsewhere. Look how realistic they are, and with audio and background noise already included on the latest model, Veo 3. Now I'll quickly jump in and show you what it looks like when you're creating. It's going to look a bit like this. I could have had multiple variations of whatever it is I'm prompting for, and I can prompt via text, frames or ingredients; I'll show you all of this. This is inside Google Labs, in something called Flow, which I will show you. There are also other places: maybe you want to use Gemini, or, if you're just using Veo 2, there's Whisk and lots of other places. But we're talking about Veo 3, and Veo 3 is best used inside Flow; I'll get to that shortly. Think of Veo as the AI model, and then there are lots of places, like Flow, where you can use it. There are third-party tools too that use Veo, probably under some licensing arrangement, but Flow is the main one. So, to be on the same page: Veo is the AI video model, usable in multiple places. I'll get to that in a couple of videos' time. Now, there are Veo 2 and
Veo 3. Veo 3 is what most of you will probably have heard of; it has recently come out with some updates, and they're going to keep updating it. I will show you these and update this course as and when they release. The difference, basically, is that Veo 3 generates background sound and lip-synced speech, at really good quality, and Veo 2 doesn't; the video itself is largely very similar. And I'm going to compare them right now with you. Let's give ourselves a really simple prompt. I'm in text to video; after this section, once we get through the basics and start making things, I'll show you exactly what these different options are, but let's do text to video now. I'll have a whole lecture on prompting and how to make it better; for this lecture, I'm going to do: an old man sat in an old Irish bar. His face is worn and wrinkled. He says, "Hi, I'm John and I love Veo 3." I'm going to run this first one. Let's do it. If you click over here, let's do it on Veo 3 Quality, with just one output. Let's hit that. Then I'm going to use the same prompt, populate it here, and let's do it on Veo 2.
We might as well do Quality there too, and we can test these side by side. Now, they finished at pretty much exactly the same time; there wasn't much in it between Veo 3 and Veo 2, despite one having audio and one not. You'll see that the Veo 2 one here has subtitles. You'll sometimes see this even on Veo 3, though less often now. When it first launched, people were getting really annoyed with these; it sometimes does it, especially when there is no actual dialogue being spoken. But let's play these. This one was Veo 2: the old man in an Irish bar saying, "Hi, I'm John. I love Veo 3."
I don't know what that last bit was, or what this bit in the middle is; it just says something. So that was Veo 2, but it's a really nice quality image. If I make this bigger, you can see how realistic it looks. And even his lip movements, although we can't hear anything, aren't blurred; they're realistic. The mouth moves properly, even the crease down here. Look how far AI video tools have come. Really nice. So now let's go to Veo 3. Now, you'll see later, when we talk about character consistency, that these two guys look completely different, and the settings look different, because I didn't prompt with any details for the background or anything like that. We'll get to that later, but let's have a listen to this
and watch it. "Hi, I'm John and I love Veo 3." All right, let me make this full screen and let's discuss. Let's play it again. "Hi, I'm John and I love Veo 3." All right, let me close it down. That looks really realistic when he's talking. Let me mute it a second and play it again: look at his mouth when it moves. Nice. Really good. Not blurred, and his teeth are fine; that's often something AI skews slightly. Even the guy in the background, washing glasses by the looks of it, looks really, really nice. Now, there was a little bit of background music added, and he also pronounces "Veo 3" slightly differently from how I say it. Maybe I'm saying it wrong. But if something is said phonetically slightly wrong, you can spell it out in the prompt: I would put something in brackets along the lines of "said like V-O three", however you'd write it phonetically, and we can use AI to help us do that later in the prompting lectures. So those are the fundamental differences between Veo 2 and Veo 3. Quality still seems really, really good on both. And also, when I'm downloading this, I can do things like upscale it to 1080p and download it.
I can do the same on Veo 2, so there's no difference there. You're going to see some other things; I'll talk about them when we cover the plans next and what's available. Then you can think about whether you actually need Veo 3, or whether Veo 2 would do, because yes, there is a price difference in getting access. You could, of course, just use Veo 2 and then use another tool like ElevenLabs, or any other tool that lets you create voices, and put that over the top; but it wouldn't sync perfectly, and I think it would skew the mouth slightly. So it is good to have it all in one, but it all depends on your budget. I put those both out at the same time, and they took the same time to generate, although Google does boast that Veo 3 is faster. Perhaps if it didn't have audio it would have been even quicker, or if we were using a different quality; we used Veo 3 Quality there, and there's also Veo 3 Fast, so maybe that would be slightly quicker. But those are the differences. That is what Veo is, what Veo 3 and Veo 2 are, and the difference between them, with a nice example.
81
So now you need to know really how to get VO3, how to get access to this, be it you’re going to use
82
flow or somewhere else, but access to VO3. And if you, you can work out then from budget and
83
everything else, whether you need VO3 or VO2, but VO3 does have some amazing capabilities.
84
So if you’re on the space to make really good AI videos and with audio background, noise, music,
85
as well as synced voices with amazing lip sync, probably the best lip sync of any AI tool
86
currently, then VO3 is your one. So let’s talk about getting VO3, how you get it, the different
87
plans, and even stuff like the costs for credits, how much you could create with these plans.
88
Let’s talk about that next.
— Veo 3 Access Made Easy: Plans, Pricing, and Credits Explained —
So, getting access to Veo 3. A lot of people are a little bit confused about this, because the search results are overpopulated by other sites; I'll show you that in a moment. I'll show you exactly how to get access and what you need: Veo 3 is available on some plans, limited on others, and at different prices. I'll show you all of that, and then I'll also show you, with credits, how much you can actually get based on your plan, and how many videos of what type that will create you.

So, the first thing we need to do is head over to Google. The easiest way to find it: just type in "Veo 3" and you're going to see what I was saying. The results are populated by sponsored posts; obviously, everyone is trying to jump on the Veo 3 bandwagon, and depending where you are in the world you'll get lots of different sponsored posts. Then there's an old post about it here, but you see you've got DeepMind. If I click on DeepMind and go to Veo, this is the actual site for it right now, and you can try it in Gemini or in Flow. So look for that one, deepmind.google.com/model/veo, or just search for it. You can also search for "Veo 3 packages". There's some sponsored stuff at the top, which is really annoying, and then you've got Google One. This is the Google AI plans and features page, which will get you through to this page right here. Or, back on the Veo 3 results like we saw just now, you can say, okay, I want to try it in Flow or in Gemini; it will ask you to sign up if you're not already signed up, and you can click through and it will show you the packages, which look something like this.

So, let's take a look at these packages. Of course, from when I'm recording this to when you're watching, and depending on your location in the world, you're going to see a different price. For reference, I'm on Google AI Ultra; there's also Google AI Pro. If you come down here, you can see lots of information about them, and there's a trial right now. You can see I'm currently seeing RM, which is probably not the currency where you are in the world, but I can tell you the prices. Normally, in a lot of countries right now, the AI Pro plan goes for about $18.99 as a first-month sale or similar, and then it goes up. Again, this will change over time. Right now, AI Ultra is equivalent to $119.99 in US dollars, I think, for three months, and then it returns to $249.99, or $249. In British pounds I think it's actually the same as the dollar amount, but don't quote me on that, because these will change all the time. So go on and take a look at what they are.

Now, this price is for three months; it's an offer they're running. When they first launched it was just 249, and then they've done this slightly cheaper 119 for three months. But I don't think you can cancel once you've signed up for that: you pay for three months, a month at a time, but you're subscribed for that amount. It says you can cancel anytime, but I'm not sure; it's not very clear in the T&Cs whether you can cancel before then and still only pay the 119. And there isn't the option right now to just buy one month at 249. So, these are the plans. Not that cheap on Ultra, a bit cheaper on Pro.

Now, the main thing you're looking for here is: can you get access to Veo 3? This wasn't previously available on the Pro plan, but it now says yes, there is limited access to Veo 3, though you get a lot fewer credits; I'll talk about that in a moment. For example, let me go into it real quick: when you're making things, you can't use ingredients to video without an Ultra plan (we'll get to that later, where you put in objects and want certain things in your video). And you're going to be very limited on the number of videos you can create. So, these are the different plans.
Is it cheap? No. The alternative is to use other AI tools: you'd have to create your video, then get background music, for example, and have it cleared; then lip sync it, maybe with a different tool or the same one as your video tool; and you'd also need the audio created with another AI tool, then put it all together. If you start adding up all those tools, Veo 3 is probably still slightly more expensive, but it is the best lip sync and all-in-one tool available right now. Is it for everyone at this price? No. But if you're going to be full time making AI video, it is pretty much the number one in the market as we're recording this. There are other things included, like Whisk; I'll show you that later.
Then there's the number of credits you get, which we'll talk about now, plus lots of other things inside, including plenty of storage (because you're making lots of video) and everything that's available in Pro. So, 12,500 monthly credits versus 1,000. Let's talk about what exactly credits are and how many videos they get you. Say I'm creating a text-to-video and I have my example in here. If I go over to my settings, I can change this to Veo 2 Fast, and you see it's only going to use 10 credits. Compare that to Veo 2 Quality, which is 100 credits. What's the difference between Fast and Quality? Exactly what it sounds like: one is done very fast, so you can iterate on it again and again, and one is really high quality, but it might not be what you wanted and you'd have to recreate it at 100 credits a time. That's completely up to you. There's also Veo 3 Fast, which is 20 credits, and Veo 3 Quality at 100 credits. So it's quite often good to use Veo 3 Fast, and you can always remake and upscale later.
That way you don't use a lot of credits, because look, for example, let me click on mine here: 11,790 AI credits remaining for this month. If I had 12,500, I know that creating 100-credit videos I'd be able to make 125 of them, which, depending on the number of projects you need and want, might not be enough. But at 20 credits each, I can make five times as many, and the same maths applies if I'm only using Veo 2. Actually, I don't know why, if Veo 2 Quality and Veo 3 Quality both cost 100 credits, you wouldn't just use Veo 3. They're priced the same, and Veo 3 is meant to be better: one has sound and can do lip-synced talking and speaking, and one can't. So you might as well use Veo 3 if you're going for quality, and the same goes for the Fast models. Maybe they just want to push people onto Veo 3 and the plan that lets you use more of it, or they really believe in how good Veo 3 is compared to Veo 2.
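To make that maths concrete, here is a minimal sketch. The per-generation credit costs and monthly allowances are the ones quoted on screen at recording time, so treat them as an illustration rather than a price list:

```python
# Credit cost per generation, as shown in Flow's settings at recording time.
CREDIT_COST = {
    "veo2_fast": 10,
    "veo3_fast": 20,
    "veo2_quality": 100,
    "veo3_quality": 100,
}

# Monthly credit allowances quoted for the two plans at recording time.
PLAN_CREDITS = {"pro": 1_000, "ultra": 12_500}

def videos_per_month(plan: str, model: str) -> int:
    """How many generations a plan's monthly credits cover."""
    return PLAN_CREDITS[plan] // CREDIT_COST[model]

print(videos_per_month("ultra", "veo3_quality"))  # 125
print(videos_per_month("ultra", "veo3_fast"))     # 625, five times as many
print(videos_per_month("pro", "veo3_quality"))    # 10
```

The same division tells you a scene's cost: ten Quality shots is 10 × 100 = 1,000 credits, or 8% of the Ultra allowance.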
And we will continue to compare those. So it depends on your projects. If you need to create full-scale movies, say I'm using Scene Builder: here's one I made of Adam and Eve. If I scrub through, you can see them going through the jungle and then Eve runs. I've done that by adding shots right here, and I could either extend or jump to and add more in, and each of those uses my credits too. So you can start working out the cost: if I need a scene with 10 shots and I'm using Quality, that's 1,000 credits out of my 12,500 for the month; or work out what it costs if I'm only using Veo 2, or using Fast. You need to start doing some maths, because if you're making a whole short film you could easily burn through credits. Plus, don't forget, you won't be able to use every single generation you create: there will be some where you think, that's not what I wanted at all, that's incorrect, it was skewed, or it didn't say what you wanted it to say. So start thinking about, one, the budget: is this within your budget for the work you need? And two, does the number of credits you get work for that? Is it enough? If you're just making really simple, funny shorts for social media, like the ones we've seen lots of, then it probably will suffice. But if you're making bigger projects, maybe it won't, and maybe this isn't the tool for you, or you'd need to top up and buy more credits. If I click here and go Add AI credits, I can see I could buy 2,500 more credits for 23.99 (that's pounds; I think the dollar amount is similar), plus 5,000-credit and 20,000-credit options for a bit more. So if you're using this professionally and it's in your budget, fine. If it's just a hobby, really think about whether this is the tool for you, or whether you could use Veo 2 and be slightly cheaper; you'd be doing the same thing, you just wouldn't have the audio and lip sync.

Now, this also isn't available everywhere right now. Maybe when you're watching it will be, and for most people watching this course it will be. Veo 2 is available just about everywhere, I think, but Veo 3, and the Ultra plan that gives you access to Veo 3, is currently available in 73 countries. Again, check; it might be more by now. If I click that, it opens this page and I can scroll down and see exactly which countries have access. I'm not sure where you're watching from, but you can go on there and check. I just searched "where is Veo 3 available" and it was the main result on Google. If your country is not listed, it's not available right now, and when you go onto Flow to sign up it won't let you do it in your region anyway; it will say it's not available, coming soon, check back. And yes, about VPNs: I know people in these countries, and a VPN won't necessarily work. Google knows where you are; they're really good at that. Or you'll use it once and pay the money, and the next time you log in it knows you're not in that country, and you'll have paid and won't have access. It even says that if you travel from a country that has access to one that doesn't, you won't have access until you return. So it's not worth the risk. You can try, but I know a lot of people who have tried and failed.

So, that was getting access to Veo 3, the plans and offers, and the credit costs. Now, there's more than one place you can use Veo 3: Gemini and Flow are the main ones, and there are some other places too. So let's talk about where you're actually going to use Veo 3 and the advantages of one over the other, before we get into opening Flow, prompting text to video, frames to video, ingredients, and making a whole scene with Scene Builder. We'll do that after we cover the next basic thing: where you're going to use Veo from Google.
— Using Veo 3 to Generate AI Video: Gemini vs Flow with Examples —
Now I need to show you where to actually use Veo 3. Remember, as we spoke about a couple of lectures ago, Veo is the AI video tool by Google. So where are you actually going to use it? Because it's available in multiple locations. Lots of you might be familiar with Gemini; you use it as an AI tool for anything from images to text-based queries and questions. If you have signed up to one of the plans we were speaking about earlier, Pro or Ultra, then you're going to see this Video option right here in Gemini. (I keep saying Chrome when I mean Gemini.) Let me compare: here's Gemini where I'm signed in, and here on another browser where I'm not signed in, you can see I don't have those options available. Whereas signed in, I can see I have Canvas and Video. This Video option is using Veo 3, and because of my plan I can see it right here: Veo 3 preview, bringing ideas to life, eight-second videos with sound; describe a scene and add details like the visual style and background music. So we can use it inside Gemini.

I can also use it inside Flow, as we've spoken about before; I'll get to this page in a moment, and there's really good information on it. This is what Flow looks like. In the next lecture I will actually open up Flow as if we're using it for the first time, because I think it's the best place to access Veo 3: it's the workspace that's meant for creating video, and it looks something like this. You can also use Veo 2, though Veo 3 is not available right now, in Whisk. Whisk is a really good platform: you can use it to create images, adding in different elements and bringing them together, if you then want to create video from an image. I primarily use it for this: when I drag in an image like here, I can get a full, proper text prompt from a Google product, describing what it thinks the image is, to use for my text to video inside Flow. I'll get into that in a moment, because we're going to talk about prompting and the best way to do it, and I think Whisk is the best place. There are some other places too, like Workspace, and I think Canva actually uses Veo 3 now as well. If we search "Veo 3" like we did before, you can see lots of things come up; companies license it to use inside their products. But Google Flow, at labs.google/tools/flow, is the tool meant for this.

Still, let's compare them, because perhaps you want to use Gemini and that's all you need. Let's put the same prompt into each, and I'll show you the difference and why I think we should be using Google Flow. So I'll use this prompt, and it isn't a great one (we'll use tools like Whisk for prompts later, as I showed you): a woman stood on the beach looking intensely into camera at us. She is worried, panicked, looking around her. She is white, age 30, with dark hair, wearing a torn red hoodie. Let's generate that: I'm inside Gemini on Video, and it's generating with Veo 3 when I hit that right there. Then over in Flow, let's go New Project and paste the exact same prompt into Text to Video right here (we'll get into using this tool next). I'll put these side by side and you'll see why I might use one over the other.

So let's go back to Chrome and wait for that. Ah, an error I've made here. Actually this is good; I leave these in the videos because this happens a lot and people get really frustrated. I'm actually on Veo 2 Fast here: it is often default set to Veo 2. You need to come into settings right here, click this down, select Veo 3 Quality, and run the same prompt again. Now it's generating there, and we're still going in Gemini, which says three to five minutes; often it's quicker than that. Let's compare them, including the time they take, side by side. We can see that Veo 2 has already completed.
Looking at it: yes, definitely a woman exactly as I described, on a beach, looking worried, and it's so realistic. So good. There are no sounds with it, though. Let's take a look while the other one generates: it's already at 12% here, and over in Gemini we don't get a percentage, which is always frustrating. Whoever invented the loading bar for sites is a genius, because otherwise we'd all just be thinking nothing was happening, wouldn't we? So let's wait for this to generate and I'll show you. Still generating inside Gemini, which is funny because it's using Veo 3 exactly the same as Flow, and Flow has just finished; it took about a minute and a half or so, I guess. Let's take a look. Compared to the Veo 2 one, which looks more like it was filmed but could have been, I don't know, a student film or TV production, this looks like a cinematic movie for sure, with the grading on it. Okay, let's have a listen. Wow, really nice. It's got a slow zoom in, and I didn't give it any camera prompting. It's got her panting, looking around, with the noise of the ocean behind her. Really, really nice. Wow.
So I've been waiting here over 12 minutes and it still hasn't generated the video inside Gemini. I was going to show you that once it has generated, there's not a lot you can do: I can download it, I can keep it, and that's about it. Perhaps you'll want to use it if you love Gemini and you're just making clips you don't need to extend or do anything unusual with. But more to the point, it's not a great place to be creating your videos. I'll keep this going; maybe it'll come back by the end of this lecture or a future one, and I'll show you for reference. The main point is to use Flow.

If you're using Whisk: Whisk supports Veo 2 right now, not Veo 3 as yet. I can animate the images I make right here, but it's using Veo 2. So if that's on your plan, of course you can use it. Gemini: not great for this. So Google created Flow, and you should be using it, because now I've got the option not only to download this if I want to, but also to do things like go to Scene Builder and add another scene. If I click this and go straight to Scene right there, I can have this clip in here, and I could extend it or grab another one. I can choose Extend (this uses Veo 2; I'll get to it later) or Jump To, which cuts to a whole other shot and can use this one as a reference, so it knows what I'm talking about for continuity. So basically, use Flow; it's created for the filmmaker. There are multiple places you can use Veo (I've shown you Gemini, and third parties like Canva will have their own tools for it), but everything is made inside here, because you could be using ingredients in here. I could add a picture of a Coca-Cola can, for example, and say the woman pulls out a can of Coke and drinks it; it could be a funny advert like that. Or I could use frames to video: take a frame and say, from this frame, create a video. There's loads you can do here, and we get into it later in the course.

Some people will ask why we're not just using Gemini here; this is exactly the reason why. Maybe I'll refresh this and see what's happening. Oh, my video is ready now; it was just frozen on there. OK, great. Maybe it was done in time; I don't want to say it wasn't. Here's what I've got, and it's using audio and sound. It's actually really good. It's not quite as cinematic as when I used it in Flow, but that could be the generation rather than Gemini itself; of course, it's the same Veo 3. It loaded for ages; maybe it was actually done relatively quickly, or at the same time, and for some reason it crashed and needed a refresh, so do that if it happens to you. But there's nothing I can really do here, is there? I can go to More, listen to it, report it, share it, generate it again, like it and so on. But as a film creator trying to make multiple shots or any kind of narrative, I can't do that inside Gemini.
That's not what it's for. So Flow is the one, and that's what we're going to concentrate on in this course, because it's the tool Google made for this. In the next lecture we'll start right from the beginning: opening up Flow for the first time, where everything is and what it is. Then we get into prompting, text to video, frames, ingredients and Scene Builder. After the next section of lectures you'll know exactly how to use these tools and what's available, so you can start creating your own videos with Veo 3.
— VEO 3: Opening Flow for the First Time: Interface and Layout Overview —
Now, let's open Flow for the first time, see what it looks like, and get to know the tool inside and out, where everything is, so it's familiar and we can start creating. The simplest way: if you search Google Flow, Google Labs is where you'll land, and it will bring me straight to Flow's page right here. If I log in, you see this Create with Flow. Or I can go to labs.google, and it will come up with everything they have: Whisk, which we spoke about earlier (that's how I get my best prompts), plus Flow and other things. So I can click Flow right here, launch Flow, and it will open up the same page.
Just sign in with your Google account. If you don't have one, sign up, and obviously you'll be getting one of the packages we spoke about earlier. So let's sign in with Google right here. Once you sign in, it starts loading up and you'll see any previous projects you had. Here's the man we had last time; I can scroll up like that and see the woman on the beach we did in the previous lectures, and I can keep scrolling all the way through. I can click on one of these if I want to and start using it, or I can click New Project to start a new project; that's how we'll start creating something.
But let's go over the page a little, because there's a bit more here. Flow TV is something you'll see in a lot of AI tools; it might be called Explore or similar. If I click it, it opens a new tab and I can have a look at what other people have created. I can just scroll through. They call this one Dream Factory. Oh, that's nice; look how realistic the water is. I'm always super impressed by this tool and the realism inside it. Really good. Wow, it's dripping and it created the word "Drip": really good, really advanced for an AI model. So nice. Let's look at another one. Okay, people swimming around in a circle. Now, I don't know what this person prompted, and the great thing is I can say Show me the prompt, and I can see the prompt they used right here: top-down shot, players in the sand, colored circles, soccer field. Okay. If I wanted to, I could copy this and use it; that's how you really get to know the prompts people are using. Use them for yourself, change them, or just treat them as a learning tool. It will also show you whether they used Veo 2 or Veo 3. I can either watch this as a top-down feed, scrolling up and down, or I can see the whole thing in a grid right here and start looking through videos for what I want to play. These are nice, really nice. It is slightly frustrating that I have to hover over them to see what they are before I can see them all; maybe they'll change that soon. Oh, that looks like a nice one; let's have a look at this car, getting its wheels to move forward. Oh, and there's fire at the front of the car and around the wheel. That doesn't look safe. Okay, great: a car with a burning wheel. Nice. So that's Flow TV.
Just close that; it opened in a new tab, so it hasn't interrupted anything here. I can also turn the volume down for things here. Then there's Discord. If you're unsure what Discord is, think of it as a community platform; a lot of you might have used it with other AI tools, where Discords are very popular. Let's open it up: there's a Google Labs Discord, with loads of information down the side and lots of people chatting about it. I can sign in, and if I come in here I can see things like Labs announcements, and rules and guidelines for Google Labs. There are communities, so you can come in here and talk to people, and people will show you what they've made; it's a great way to see what others are doing. Nice, like spreading liquid diamonds. Oh, that made my back shudder. There's loads on here: lots of people talking about Whisk, and then here's Flow, with Flow Announcements, General and Flow Prompts. That's a good one, and we'll talk about prompting soon: people sharing ideas, how they got prompts for certain things and what the best methods are. So Discord is a good place for education, if you like.
As for the other things on here: if I click this, it opens the Help Center, and there's loads in it, like a nice all-in-one center. What is Flow? Where is it available? How does Flow work? Capabilities. What are the credits? (We've already discussed most of this.) How to use Veo 3, which has only a very short description right here; we'll go through all of it in the course, obviously. So the Help Center is available at the top. I've also got Learning Flow: we can go to a learning center, which is hosted on YouTube (of course, it's a Google product), where they've got their own video on how to use Google Flow. There are also hundreds, if not thousands, of videos about Flow, but you're in this course to go through it step by step, nicely structured; still, anything official from Google is always worth a watch. Then over here, if I click on Ultra, I can see things like how many credits I've got, buy credits, and my membership details if you're changing your billing or whatever; also your library of anything you've created. And I can view my projects either as a scrolling list or in a grid. So that's the overview for this.
Let's go into New Project right here. Click it and it opens up: here's our canvas, if you like. Again, I can view it like this, or larger. Over here is your prompt bar. If anyone doesn't know what a prompt is, we're actually getting into that next: think of it as the instructions you're giving the AI model for what you want to see visually in a video. There's no right or wrong way to do this, really, and even with the same prompt you'll get a different result each time, but with practice there's more or less a best way to get good results: more detail, less detail, avoiding conflicting detail, et cetera. That's because we're in Text to Video, so I'm going to add text and it gives me video. I can also do Frames to Video, which means I can upload a start frame and an end frame (I want to go from this one to this one), or just a start frame, and then tell it what to do. I can either prompt it, like "I want the camera to zoom in" or "pan left", or right here, if you click there, there's where I've got jib down, dolly, pan right. And if you don't know what these are, they've actually got examples of them: tilting down, tilting up, truck left, truck right, pan left, orbit right, jib up, jib down, dolly out, dolly in.
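Those camera-move terms slot naturally into a text prompt. As an illustrative sketch only (Flow accepts free text, so this subject-action-camera structure is a habit that keeps prompts complete, not a required syntax, and the helper here is hypothetical):

```python
# Camera moves as listed in Flow's examples panel.
CAMERA_MOVES = [
    "tilt down", "tilt up", "truck left", "truck right", "pan left",
    "pan right", "orbit right", "jib up", "jib down", "dolly out", "dolly in",
]

def build_prompt(subject: str, action: str, camera: str) -> str:
    """Assemble a prompt from subject, action and a known camera move."""
    if camera not in CAMERA_MOVES:
        raise ValueError(f"unknown camera move: {camera}")
    return f"{subject}, {action}. Camera: slow {camera} towards the subject."

print(build_prompt("A woman sat on a beach", "looking distressed", "dolly in"))
```

The same sentence typed by hand works just as well; the point is simply that naming the move ("dolly in", "truck left") in the prompt is an alternative to picking it from the dropdown.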
So I could use that: for example, I want a dolly in, and my example is a woman sat on a beach looking distressed, and I want it to dolly in towards her face. Perhaps I upload a starting image that we've created, either inside Gemini or another tool you have, or a picture of yourself that you want it to animate. You'll see later, when you're using these frames, whether they allow you to use Veo 2 or Veo 3; it will come up with some warnings here. Some things are available in Veo 3, and some will automatically put me back to Veo 2. You can see a lot of that covered; it's one of the main questions that gets asked. I think it says something like, "Why did I get switched to a compatible model?", and it will show you what's available for Veo 3 and Veo 2, though with very short answers from Google. So we're going to go through them, and you're going to see what's available and what isn't.
249
Now, right here, I’m in flow and I’m
250
on here’s the day of what we’re creating
251
this.
252
And then also there’s something here called Scene
253
Builder.
254
So you saw me click on my image
255
and I can go Scene Builder in the
256
corner, or I can click Scene Builder here.
257
And this is the key difference: previously, in
258
other AI models, you'd create
259
one video and then create another one and
260
another one and another one.
261
And in your own editing software, you’d put
262
these together.
263
But Scene Builder, using VO3, is
264
able to put these together and create a
265
whole scene all the way across with consistency.
266
Because if you’re extending or jumping to, it’ll
267
take the previous scene you have and use
268
it as reference for the next one.
269
I can create a whole scene with AI
270
video right here.
271
So if I actually go back to my
272
shot right there, you can see that if
273
you already have video on here, I can
274
click Add to Scene right there and I
275
could start adding and playing with that.
276
I can also do things like flag it,
277
delete it, and I can also go full
278
screen if I want to view this.
279
So if I add that to Scene, it’ll
280
populate in exactly the same way as if you
281
went to Scene Builder and it’s just already
282
in there.
283
And then I can add to my scene
284
here.
285
I can either "jump to", meaning it cuts to
286
another shot.
287
Maybe I say jump to a shot from
288
behind this man and it moves in towards
289
his head as if we’re point of view
290
of someone walking into the bar.
291
Or I can extend this.
292
And again, there are limitations or models it’s
293
using VO2 and VO3 for some of these.
294
We’ll get into that in a bit.
295
But here’s where I can extend the shot.
296
So I wanted to talk a little bit
297
more, talk for longer, say something else, laugh.
298
I can extend the shot as it is
299
right here.
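One way to picture what Scene Builder is doing: a scene is just an ordered list of shots, where each new "extend" or "jump to" generation carries the previous shot along as a reference, which is how the consistency works. The toy model below is my own sketch of that idea, not Flow's actual data model:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Shot:
    prompt: str
    reference: Optional[str] = None  # previous shot, used for consistency

@dataclass
class Scene:
    shots: list = field(default_factory=list)

    def add_shot(self, prompt: str) -> Shot:
        # "Extend" or "jump to": the new generation references the
        # previous shot so the whole scene stays consistent.
        ref = self.shots[-1].prompt if self.shots else None
        shot = Shot(prompt=prompt, reference=ref)
        self.shots.append(shot)
        return shot

scene = Scene()
scene.add_shot("A man sits at a bar, seen from the front.")
scene.add_shot("Jump to a shot from behind the man, moving toward his head.")
```

In other words, each shot after the first is generated against the one before it, rather than from scratch, which is why you no longer need to stitch independent clips together in editing software.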
300
And that’s pretty much the entire layout right
301
there of Flow.
302
It’s not that scary once you come in.
303
It’s actually very simple.
304
There aren't features like inpainting and lots of
305
other complicated extras.
306
So it’s not very scary.
307
And I hope that explained it pretty well
308
and you’ll be comfortable with using it.
309
So the main thing you're going to want
310
to know: this is mainly a text-to-
311
video model, especially VO3, as you'll see soon, as
312
opposed to an image-to-video model, which some
313
other AI tools are.
314
So we need to understand about text prompting.
315
It’s a real art.
316
It’s really needed to nail that down and
317
get that perfect, because it's going to really
318
dictate the results that you get.
319
So let’s do some prompting practice in the
320
next one.
321
I’ll show you what the best model is.
322
I’ll even show you a way to cheat
323
to be able to get these done, created
324
for you in various ways.
325
And then we'll go and test some and
326
see what results we get.
327
So let’s talk about prompting.
— VEO 3: Prompting (Important): How to Create, Automate, and Optimize Prompts —
1
Now prompting, this is a very important lecture.
2
This is your main tool or the main
3
driving factor behind how this tool gives you
4
the best results that you want.
5
The better your text prompt, the closer the
6
result will be to the vision you have in
7
your mind for what it is you want to
8
create.
9
Now there are some other little hacks and
10
ways around this but I’ll pretty much go
11
over what it is you need.
12
So if we’re starting a new project and
13
you’re doing text to video then you’ll need
14
a great prompt.
15
Even if you’re in Scene Builder and you’re
16
adding to the scene, adding a jump to
17
or extending, then you’ll also need great prompting.
18
Now I’ll bring up a slide and I’ll
19
go over what you have to think about
20
and what I use.
21
There are certain points to think about.
22
Imagine you are describing the image for the
23
scene, the starting image of your scene inside
24
your mind.
25
So the first thing I think about is
26
who is in it, my character.
27
Now, who is my person? What is their
28
gender, what is their age, what do they
29
look like, what colour is their hair, what is their
30
face like, are they really wrinkled, are they
31
old, are they young, what are they wearing,
32
how is it being worn, how are they
33
stood and you can even use emotive language
34
like are they confident, are they shy, are
35
they hunched over, are they smiling, frowning, happy,
36
sad etc etc.
37
That’s your person you need to think about.
38
Now the next point you need to think
39
about is your background.
40
Where are they, what is the location, is
41
it a beach, is it a city scene,
42
which city is it, is it sunny, is
43
it daytime, is it nighttime, is it raining,
44
is there an eerie feeling over it, using
45
emotive language there again, is it a rundown
46
area, is it a new area that they’re
47
in, what does it look like and really
48
describe the scene.
49
Perhaps there’s a colour grade to mention right
50
here, is it kind of washed with a
51
sepia-type tone, is it bright and sunny, is
52
it cinematic, dark and dingy?
53
Be thinking about that and of course because
54
we’re on VO3 we’re also going to talk
55
about the music, is there background music, what’s
56
it like, are they speaking, what are they
57
saying? You have to type in "and the
58
person says", then put in what they're
59
saying but how are they saying it, what
60
is their voice like, is it raspy, are
61
they young, do they have an accent and
62
that also applies when you're describing someone
63
to start with: whether they are, for example, Eastern
64
European, American, or British is going
65
to dictate what they sound like and
66
how they sound.
67
All these things to think about and then
68
to finish off I like to once again
69
add an emotion behind it, is the video
70
itself, is it scary, horror, is it funny,
71
is it a comedy and that will really
72
dictate what the overall feel of the video
73
is like as well.
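Those prompting points can be summed up as a simple checklist. The sketch below folds them into one Python helper; the function and field names are my own shorthand for the lecture's points (character, setting, colour grade, dialogue and how it's delivered, overall mood), since Veo itself just takes the final string:

```python
def build_veo_prompt(character: str, setting: str, grade: str = "",
                     dialogue: str = "", voice: str = "",
                     mood: str = "") -> str:
    """Assemble the lecture's prompting checklist into one text prompt.

    Hypothetical helper: each argument is one checklist item, and the
    result is just an ordinary text prompt you could paste into Flow.
    """
    parts = [character, setting, grade]
    if dialogue:
        # Dialogue convention from the lecture: say who speaks, quote
        # the line, then describe how it's said.
        spoken = f'The person says, "{dialogue}"'
        if voice:
            spoken += f" in {voice}"
        parts.append(spoken + ".")
    if mood:
        parts.append(f"The overall feel is {mood}.")
    return " ".join(p for p in parts if p)

prompt = build_veo_prompt(
    character=("An Asian man in his 50s, homeless and in ruined "
               "clothing, sits slumped on the steps of a building."),
    setting="Daytime but overcast; the scene is sad and cinematic.",
    dialogue="I used to run this town",
    voice="a raspy voice with an Indian accent",
    mood="melancholic",
)
```

Nothing here is required by the tool; it's just one way to make sure every point on the checklist actually makes it into the prompt.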
74
Now, those are my prompting points that I put
75
on here. There isn't a lot of official
76
information: if I go onto Labs, there's not
77
a lot about prompting. But in the Discord
78
you can look at actual prompts in Flow
79
and see what people are doing, and there's
80
lots of information online as well, plus the
81
video that I told you about, which mentions
82
prompting but not in great detail. That's
83
why, I guess, you've taken this course, and
84
I'll explain what it is
85
that I use.
86
There’s a hack around this, now if I
87
was using Google Gemini for example I could
88
say write me a prompt for a video
89
I want to create in Flow VO 3,
90
so I’m telling it where I want this
91
for.
92
The scene is an Asian man in his
93
50s sat on the steps of a building,
94
he’s homeless in ruined clothing, the scene is
95
sad, cinematic, daytime but overcast, and he says,
96
in a raspy voice with an Indian
97
accent, "I used to run this town."
98
So I’m going to tell it to do
99
this, now that looks like it’s a prompt
100
in itself doesn’t it but if I use
101
Gemini and tell it hey I’m making this
102
for VO 3 then I can tell it
103
hey can you make me a perfect one,
104
seeing as you are a Google product and
105
I’m using a Google product here, tell me
106
what I need to put into here.
107
So let’s do that and then while I’m
108
waiting I’m also going to say break down
109
for me the elements needed for a good
110
prompt in VO 3 in Flow and it
111
will actually tell us if it’s pretty much
112
similar to what I just told you now.
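That "hack" is really just a meta-prompt: you describe the scene roughly and ask Gemini to write the polished Veo prompt for you. Here's a minimal sketch of the request, with wording adapted from the lecture; the helper function itself is hypothetical:

```python
def gemini_meta_prompt(scene: str) -> str:
    """Build a request asking Gemini (or any LLM) to write the Veo
    prompt for you, and to break down the elements of a good one."""
    return (
        "Write me a prompt for a video I want to create in Flow "
        "with Veo 3. "
        f"The scene is: {scene} "
        "Also break down for me the elements needed for a good "
        "prompt in Veo 3 in Flow."
    )

request = gemini_meta_prompt(
    "an Asian man in his 50s sat on the steps of a building, "
    "homeless, in ruined clothing; the scene is sad, cinematic, "
    "daytime but overcast, and he says, in a raspy Indian-accented "
    'voice, "I used to run this town"'
)
```

Telling the model where the prompt will be used (Flow, Veo 3) is the important part; the rest is just your rough scene description.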
113
Now let’s go with this, so option 3,
114
option 2, option 1.
115
So here’s a detailed and evocative prompt I
116
can use right here, if I click on
117
that, you can see it, and I
118
can see the whole thing right there.
119
I can do a concise and direct or
120
focus on the emotion.
121
Tips: experiment, try each of these prompts, mix
122
and match.
123
Okay, and then I just asked it to give
124
me a breakdown quickly: creating a good prompt
125
for VO 3 or any advanced video generator
126
is essentially about being clear and specific.
127
So subject and character, yep that’s what I
128
said, who or what’s in the scene, appearance,
129
number of subjects, relationships between the subject, context
130
and setting, where’s the scene, environment details and
131
time of day and weather, did that, said
132
that.
133
Action and movement: what is the subject doing? Yes,
134
very important and camera movement, is it static,
135
pan, tilt, zoom, dolly, close-up, angle.
136
Something I didn’t mention on the slide, I’m
137
going to put that on the end here,
138
is camera movement, because inside
139
VO3 (you saw me do that inside
140
Flow) you can actually add that manually with
141
their selection.
142
I like to actually prompt it and I
143
do prompt it.
144
So let’s add the camera movement and character
145
movement onto our slide for that.
146
There’s also the style and the mood, what’s
147
it like, is it emotional, what’s the lighting,
148
color palette, audio, dialogue, voice characteristics, accent, yep
149
we said all this, sound effects, music in
150
the background, and do you want
151
subtitles and then some general tips.
152
So you can get the information right inside
153
Gemini, to go along with the slide that I've
154
been building up, and it's pretty good
155
and pretty concise.
156
Once again you can do less, you can
157
do more and see how it is and
158
experiment with that.
159
So let’s use one of these prompts right
160
here, I like this, let’s copy that, let’s
161
go into Flow and have a little look
162
at this.
163
Take the video, paste that in.
164
A solitary homeless Asian man in his 50s
165
sits slumped on the weathered stone steps of
166
a grand dilapidated old building.
167
Nice, it’s describing the building which I didn’t
168
do.
169
His once fine clothes are now torn and
170
stained, clinging to his gaunt frame.
171
The daytime sky is heavy and overcast, casting
172
a melancholic diffuse light over the scene, highlighting
173
the grime and decay around him.
174
The atmosphere is profoundly sad and cinematic, with
175
a sense of faded grandeur. He looks
176
into camera, his eyes holding
177
a lifetime of sorrow, and in a raspy Indian-accented
178
voice he utters, "I used to run this town."
179
Let's make sure once again (they always reset
180
this) that we're on VO3, and I'll go with
181
Quality while I'm at it. Let's run that
182
one.
183
Now I’m going to compare this right over
184
here Gemini, here’s a more concise and direct
185
version, let’s put that in and let’s compare
186
these side by side, still on VO3.
187
A sad cinematic shot of a homeless Asian
188
man in his 50s in ruined clothing, sitting
189
on building steps, overcast daytime, he says in
190
an Indian accent, I used to run this
191
town.
192
No details really about the emotion, about the
193
building, but let’s compare these side by side.
194
Okay, these are both finished generating, really nice,
195
look at this, it's almost letterboxed, it's got
196
black bars above it.
197
Instantly I can see that this person looks
198
more of Indian, Southern Asian descent and this
199
one looks more like Oriental, East Asian descent.
200
Although I said Indian accent, I didn't say
201
in the prompt, even here, that he's an Indian
202
Asian man or anything like that; I was seeing if
203
it would pick up on that, and it didn't
204
here.
205
The longer prompt obviously takes Asian to mean
206
East Asian, whereas this,
207
the more direct, shorter prompt, did pick up
208
on that.
209
So let’s watch these side by side for
210
a second, let me just pick play.
211
I am Zaan I used to run this
212
town.
213
Okay, so it’s added in some, let me
214
make this full screen a second.
215
It’s added in something at the start here.
216
I used to run this town.
217
Did he say, did he say in a
218
different language at first, or just say
219
something nonsensical?
220
Obviously I don’t understand what was said at
221
first, but let’s look at the image for
222
a second.
223
We’ve got a guy in torn clothes on
224
the steps of a grand building in ruin,
225
it’s definitely emotional, it’s got the right emotion,
226
a very slow zoom into him.
227
He’s dirty.
228
He says our line, I used to run
229
this town.
230
I don’t know what he says at the
231
start, but it’s a really, really nice image.
232
Let’s close that.
233
I’m actually going to also download this and
234
show you how we do that.
235
We click upscale to 1080p, and it's
236
not costing any extra credits to do that.
237
When it’s ready, it’ll say, hey, download it
238
now.
239
It’s ready.
240
Let’s compare that to this one.
241
This was still VO3, but it was not
242
using a very big prompt here, just a
243
kind of short one.
244
We still got a guy on the steps.
245
It still looks cinematic.
246
It’s not got that low angle kind of
247
shot in here, but let’s play it.
248
I used to run this town.
249
Wow, that’s really nice though.
250
Let me make that big screen and show
251
you this.
252
It’s still a really nice shot, probably closer
253
to what we were imagining.
254
I used to run this town.
255
Wow, we’ve got a guy, we have an
256
Indian accent definitely, and it slowly zooms into
257
him and he looks up and says that.
258
We didn’t have the huge prompt that we
259
had last time, but they both generated amazing
260
images.
261
Kind of proof that prompting is very important,
262
especially if you have something very specific in
263
your mind.
264
Say someone is holding a specific object and it
265
looks a certain way. But if you're kind
266
of open to interpretation and let the
267
AI kind of do its thing, perhaps
268
it'll think of something in a way you
269
didn't think about, and you can get really
270
nice results.
271
This is a great result on a very
272
short prompt, kind of similar to our last
273
one, just without the extra details, but we didn't
274
need extra details.
275
He wasn't holding a particular object in his
276
hand, and his hair wasn't styled in
277
a certain way.
278
This is really good and really nice.
279
I’m going to also download this one in
280
exactly the same way.
281
You click this and, look, it shows you
282
the progress like this.
283
The last one is done and it's downloading,
284
and the next one is still processing here.
285
So that’s really nice.
286
Now there’s another way you can get your
287
prompt.
288
You can either ask Gemini like that and
289
say, make me a prompt, type it in
290
yourself following what we have discussed, or you
291
can use Whisk.
292
Now I’ve touched on this a little bit,
293
but let’s take this away for a second.
294
I’ll show you here.
295
So you can either create your own image
296
in here.
297
So actually let me just take this away.
298
If I just copy this first bit: an
299
Asian man in his 50s sat on steps, homeless.
300
Let’s copy that and let me put that
301
into here.
302
I could then generate an image with that.
303
So let’s generate this and it’s going to
304
start generating me images.
305
Great.
306
So now we’ve got our images here.
307
I can click on them.
308
I can have a look and I can
309
see what I put in here.
310
Again, it's interpreted Asian as East Asian as opposed
311
to South Asian.
312
So you could be more specific, but if
313
I wanted to over here, if it’s hidden,
314
you can do this, drag that image into
315
here.
316
And when it’s there, it’s going to analyze
317
the image, which is great.
318
It’s doing it for me.
319
Remember this is a Google product.
320
It's under Google Labs, just search Whisk, or
321
I showed you earlier how to access it.
322
You can click down here and it’s basically
323
given me the prompt.
324
It’s a Google product.
325
You’re saying, Hey, here’s what Google thinks.
326
This image is an older man with deep
327
wrinkles on his face and forehead, slightly parted,
328
short, dark, gray hair, uh, black hair seated
329
on concrete steps.
330
He's got dark-colored eyes.
331
This, that, the other; the building is
332
weathered, somewhat muted; a contemplative, somber expression.
333
It’s got everything here.
334
So I could then take this if I
335
wanted to, Hey, that’s the prompt that I
336
wanted.
337
And I can put this into flow and
338
say, Hey, this is my prompt.
339
Another way is, so that’s creating your image
340
inside Whisk here, a Google product.
341
If I wanted to, perhaps I already had
342
my own image that I want.
343
So maybe you have your own image.
344
I’ve got one right here.
345
This is a viral video going around of
346
this person.
347
It’s either a Yeti gorilla or someone in
348
a Yeti-style costume, a Bigfoot costume, I
349
think it is, playing the banjo.
350
So if you already have an image that
351
you like, that you’ve seen either online or
352
your own image, whatever it is.
353
Oh, something went wrong, fetching the media.
354
Good.
355
That happened while I’m showing you this.
356
Let’s do it again and see if it
357
works.
358
Analyzing image.
359
Okay.
360
It did it that time.
361
You can drag your image into here and
362
in exactly the same way, it’ll tell me
363
what it is.
364
A character resembling a large, dark brown ape,
365
similar to an orangutan or Bigfoot is seated
366
in the forest.
367
Okay.
368
The character has thick, shaggy brown hair.
369
What does it say about this? Revealing gums,
370
inner mouth, prominent brow.
371
He's holding a five-string banjo, and
372
the banjo is light-colored.
373
Yes.
374
The character is sitting on what appears to
375
be a fallen log or tree.
376
Great.
377
At the bottom of the image, though, is a
378
white caption that reads this.
379
Okay.
380
Let’s remove that.
381
We wouldn’t want that for our prompt, but
382
it’s showing all of this.
383
Great.
384
Good.
385
I was looking to see, it probably says somewhere
386
that it's a low kind of shot, but you
387
could, in exactly the same way, copy this
388
and paste it over into Flow using Whisk.
389
It’s also a great way for you to
390
get character consistency, which we talk about a
391
lot later.
392
If you’re using this, you’re going to be
393
able to have the same kind of prompting
394
each time when you are creating inside of
395
flow.
396
Granted, you can use Scene Builder, keep
397
extending, add other shots, and it will
398
use the previous one as a reference. But
399
those are ways to get the ideal prompt
400
for VO3, or really any VO, VO2
401
included.
402
So you can either use Gemini, or you can
403
use Whisk.
404
If you have an existing image, you can
405
also paste that image in here, right there.
406
I could drag this in and I could
407
say, Hey, write me a prompt for this
408
for VO three, use whisk or use and
409
follow the guide that we have for this
410
course for prompting.
411
Either way, sometimes a long prompt will work.
412
Sometimes a short one.
413
I’m actually going to show you next.
414
I’m going to talk to you about text
415
to video.
416
We’re going to start generating some examples and
417
we’ll go all the way from a very
418
simple prompt to a far longer, more advanced
419
prompt, and you’ll see the difference and what
420
you need.
421
Whether you really need loads at all
422
really depends on you, the project and
423
what it is that you want.
424
So let’s move on and let’s actually start
425
creating some video.
426
Let’s use the text to video feature and
427
test it out.
— VEO 3 : Text-to-Video – Creating AI Videos Using Text Prompts (Including Voices) —
1
Now we’re going to talk about text-to-video and then we basically go through these one at a time.
2
Let’s do frames-to-video, ingredients and scene builder.
3
So the first one is text-to-video, and we've just done a whole prompting lecture looking
4
at Whisk and Gemini to get the perfect prompt, which is 99% of this.
5
But let’s play more with text-to-video because I want to show you exactly how this works.
6
I'm going to change this to the fast model for this; it shouldn't give
7
anything different in quality really, maybe a slight bit, well we’ll see.
8
And I’m going to show you how we build this up, both this is about prompting but also
9
what do you get with text-to-video and we’ll play with some more and try and come up with
10
some great stuff.
11
And this will also be some ideas for you for creating some videos also.
12
So, text-to-video: this is mainly a text-to-video model,
13
as opposed to an image-to-video model like some other AI tools.
14
Now, like I said before, you can do anything you want; let's actually do a cat.
15
You could do anything you want to and I could also do this twice.
16
Let’s do a cat, exactly the same.
17
Let’s click OK and we’ve got these two running right here.
18
And even the most simple of prompts gets a result.
19
But the same prompt will get a different result each time.
20
And here are my two results: they've actually both generated a ginger cat, but with
21
slightly different hair colours.
22
One is a shot, a full-bodied shot inside a living room of some kind with a sofa in the
23
background and one is a close-up.
24
Let me just play these quickly.
25
Oh, we got some purring going on as they look around, nice, OK, and what did I get for my second one?
26
Oh, I got a meow at the start of that, oh that was the last one.
27
And then some more purring, great.
28
But I didn’t give it any more detail, oh that was nice, see the camera just slightly moved
29
over to that side when you did that and then it meowed, really nice, really nice.
30
OK, so I didn’t give it any detail about that, so let’s go a black cat, close-up, shocked
31
look on its face, looking at camera.
32
Now I’m going in slightly more detail, let’s play with this.
33
OK, now I’ve definitely got, still realistic which is great.
34
I’ve still got this, definitely got a shocked look on their face, bit of a puma kind of
35
purr going on, nice, but it definitely gave me what I asked for.
36
You can see how we’re going from that to slightly more detail.
37
Now what if I wanted to use the same prompt again, but I’m now going to say, in astronaut’s
38
costume in outer space.
39
Now we’re getting a little bit wild here.
40
I still haven’t told it the style, it still should be realistic.
41
By default, I find that this model definitely goes realistic, as opposed to computer-generated,
42
animated, get to that in a moment.
43
Let’s see what it generates for this.
44
OK, so I’ve got this shocked cat inside an astronaut’s costume.
45
I didn’t give any details about the costume, colour, anything else or what it looks like in outer space.
46
Let’s have a little play of this.
47
So it meows and doesn’t move its mouth at the start and then it does move it at the end.
48
OK, but you can see we can get pretty much anything that we want, anything at all.
49
Let’s do the same thing here, but this time I’m going to go in a, I’m going to say in
50
the style of Pixar animation, an animated black cat, shocked face and then in outer space.
51
OK, let’s have a little look at that.
52
I haven’t told it the camera shot.
53
You could say a full body shot, wide shot, etc.
54
But let’s go and run this.
55
Now that looks exactly how I imagined. Look at that.
56
Looks like a Pixar animation.
57
Let’s play this.
58
It even gives a sigh at the end and his eyes cut right there and then a bit of a huff and
59
the noise of space behind it, kind of as if the microphone is inside
60
the helmet of the cat. Really nice.
61
So we can do any styles we want, anything like this.
62
But with text-to-video,
63
maybe you don't know what the styles are.
64
Maybe you don’t know what you want it to look like.
65
You can imagine it.
66
But what the heck is that thing called that you’re trying to do?
67
So let me give you an example.
68
So maybe what you're thinking of in your mind is something like paper cut-out animation, like stop motion. Let's move on.
69
But perhaps you don’t know what that’s called. That’s fine.
70
We can use some tools here.
71
Let’s grab that and just put that right there.
72
Then let's go back into Whisk.
73
I like to do it inside Whisk.
74
I will talk about Ingredients in Flow later.
75
So now, inside Whisk, let's just remove this right here.
76
Let me move down here and grab my style.
77
So you see we’ve got three elements here.
78
Subject, scene, style.
79
I can go my style right here.
80
Now perhaps I wanted a black cat underneath subject.
81
Let's put in my black cat, and now for the scene, perhaps I want outer space.
82
So on the scene, let’s drop in that image right there.
83
My subject didn’t go in.
84
Let me drop it again.
85
Unsupported image format. Oh, I see, it's the wrong file type.
86
OK, let me just open that up and save it as a JPEG or something.
87
Export that as a JPEG. Yes.
88
And now let’s drop that in subject.
89
So now I’ve got my subject, my scene and my style. Great.
90
And let’s run that.
91
And that did not give me great results, did it?
92
Look, here’s definitely the scene.
93
They’ve not done the paper cutout.
94
So this doesn’t always work when you’re adding in all elements.
95
Let me just do this.
96
We can actually take it away right here.
97
Take this away so I can describe these.
98
So I’m going to say a black cat in outer space in an astronaut’s costume in the style of
99
paper cut out animation.
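Whisk's three slots amount to composing three descriptions, one per slot, into a single prompt. Whisk itself works on dragged-in images, so the string version below is just my own illustration of the idea, not anything Whisk exposes:

```python
from dataclasses import dataclass

@dataclass
class WhiskInputs:
    subject: str  # e.g. a description recovered from the subject image
    scene: str
    style: str

    def compose(self) -> str:
        # Fold the three slots into one text prompt you could
        # carry over into Flow.
        return (f"{self.subject}, in {self.scene}, "
                f"in the style of {self.style}")

inputs = WhiskInputs(
    subject="a black cat in an astronaut's costume",
    scene="outer space",
    style="paper cut-out animation",
)
composed = inputs.compose()
```

If one slot pulls the result in the wrong direction (as happens in a moment), you can drop that slot's image and describe it in words instead.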
100
And I have the paper cut out right here.
101
And in fact, if I click on this, it’s going to say this image showcases a paper cut out aesthetic.
102
So if you need to know what this was called, if you didn’t know what the styles were called,
103
drag it into style right here and then click on this.
104
And it’ll tell you pretty much in the first line always.
105
So let’s do this and let’s run it. Nice. OK, perfect.
106
This this is what kind of thing I was thinking of right here.
107
So now what I can do is put it into subject; it doesn't really matter where I put it.
108
I can analyze the image, and then it's going to give me, once again, the complete
109
prompt that I can use.
110
If I grab this, I can just do all the prompting right here and put it into Flow,
111
and now I can grab my style and put the entire thing in there.
112
So if you didn’t know what the style was, I always do this.
113
I’m like I’ve done it before and I’ve seen it in movies and I’ve grabbed the scene from
114
that movie, and I don't know what it's called or what the angle of shot is called.
115
I’ve put it right in here.
116
Just put it into whisk and it will tell you, oh, this is a low angle shot.
117
This is a drone shot, whatever it’s called.
118
This is in the style of and it will give you all the style.
119
Perhaps it's like cyberpunk, or perhaps it's some kind of near-realism or some kind of
120
futuristic sci-fi or something that you don’t quite know what it is, but you know the look. Drag it in. Click this.
121
You get yourself what it’s called right here and then you can paste the whole prompt if
122
you want to play with it and make the whole thing into flow and it will give us the results in a video. Nice.
123
And here is my animation.
124
Slightly less of the paper cut out than I was expecting, but you could regenerate this and play.
125
Let’s see what happens here.
126
Okay, a lion's roar, and it sounds a bit off, but you can see how people are using this
127
for videos like this, where a children's animation for YouTube or such
128
like is blowing up and getting millions of views; people are using VO3 for things
129
like this, and you can get a consistent feel and look if you nail your prompting and what
130
it is that you want to do.
131
I can see that the paper cut-out style isn't actually mentioned here, so you could
132
add that in just to make sure you're getting the exact style that you want.
133
But that's text-to-video.
134
That’s how we use it.
135
It's going to be standalone shots until we get onto Scene Builder in a few lectures'
136
time where we can put these all together and make an actual movie.
137
But to get yourself familiar with text to video, any style you want, use the prompting
138
from the last one.
139
If you don't know what something is, use Whisk to generate it and find out, and get
140
playing with it to make sure you get these right.
141
Because using Whisk to do things like this and find out what something's called is going
142
to save you a lot of credits, because we know how expensive the program is; it'll save
143
you some money and time, and give you a little bit of learning for yourself too.
144
So in the next video, let’s go down and let’s talk about frames to video.
— VEO 3 UPDATE (Frames to Video with Veo3) —
1
Now I’m just interjecting here with a little update lecture as things update on the tool
2
on VO3. I will of course add these update lectures to keep you up to date with it and
3
something has happened. You’re about to see in the next lecture and I will keep that in
4
because there’s still tools, still examples, good examples on how to use it but you just
5
need to ignore one thing, because they've updated it now. So I'm about to show you, not text
6
to video, but frames to video, where I can do something like select
7
a frame like I’d already selected here. There’s me right there and then I could say, man says
8
hi and welcome to the course. I’m happy to have you here. American accent excited animated.
9
Now previously and you’ll see in the next lecture when I click run right here it would
10
pop up and say hey we can’t use VO3 you have to use VO2. VO3 is not available inside frames.
11
Well now there’s been an update it is available inside frames so you can use VO3. The difference
12
is of course that now I have "hello, nice to meet you" here. If I only used VO2, I wouldn't
13
have access to audio. He wouldn’t be able to speak or I wouldn’t be able to say background
14
noise or the noise of a plane outside or whatever it is that I want to do. So I can put this
15
and I can put an end frame also; you'll see me do that, and that's why I'm keeping the next lecture
16
in: you'll see me use beginning and end frames, how you can and can't use those, and what
17
this is really for. I couldn't just have another image of a cat, for example, and want to go
18
from this image to a cat seamlessly, like morphing or merging. So you'll see me use those in the next
19
lectures for this one. Let's run it, and run it with VO3, meaning finally (this is
20
so good) I can have frames, the image I want, with speech and audio. And here are the examples;
21
let's take a little listen: "and welcome to this course person and I'm happy to have you
22
here." Oh, that one messed up: it said "welcome to the course person" and "welcome to this course
23
person." Yeah, funny, right? Listen to this one: "Hi and welcome to this course and I'm happy
24
to have you here." Nice, that was good. I keep talking at the end here; you'd just cut that
25
off when you edit. Let me shut myself up. So now you can use an image, and with VO3 enabled,
26
you can have audio. I could give it a background noise, or I could tell it to speak in a Gen Z style
27
with a really animated expression, et cetera, et cetera. I could obviously have said the camera zooms
28
in slightly here or something, but I just wanted to show you that VO3 audio, which was previously
29
not available with frames (you were limited to VO2), is now possible.
30
go on to the next lecture, where you can just see how to use frames to
31
video with different examples, how to use this, obviously without audio. That's all this
32
lecture was for, just a little update. I'll keep you updated if anything else changes
33
because I'm going to show you ingredients to video. As of the time of recording this
34
video, ingredients to video only allows you to use VO2, but seeing as frames has just
35
changed, maybe ingredients to video will change also, and you can have audio with ingredients
36
to video. OK, let's get on with the course.
— VEO: Frames to Video – Creating Videos from Images Using Veo —
Now, the next part of the creative process, after we've already done text to video, is frames to video. If I click this, you can see it populates down here with a plus, another plus, and a camera motion icon right here. Now, as of the time of recording (it might be slightly different soon), there's a slight limitation, but I'm going to show you how to use it, what you might use it for, and where it excels and gets exciting.
So all you have to do is click the plus icon and you can upload an image. The first time you do this, a tick box pops up asking you to agree that you won't use it for anything harmful and that you have the rights to any image you upload. So let me upload an image of me here. If I click the plus icon, here's one I uploaded already of me, and here's an image from a scene I used earlier: Adam and Eve figures running through the jungle, a Garden of Eden kind of setup.
There’s also you can upload here or you could generate an image inside here and they’d be
14
using Imogen. So you can upload an image or generate one, describe it in much the same
15
way we do for prompting. But let’s use this. Most people you’ll be using this because you
16
want to upload an image that you already have. So let’s click upload. And here is an image
17
of me. It asks you to crop it to make sure it’s the right size that you want. So let’s
18
crop and save this. And there it is. In the end here, we could also add an end frame.
19
Let’s talk about that in a bit. I’m going to put this and you’ll see what happens. I’m
20
going to put this on to VO3 because that’s what we want and we’ve been using. And I can
21
also tell it, well, let’s not tell it anything right now. Let’s just say man waves at camera
22
and says, hello, welcome to the course. And you’re going to see the limitations right
23
here. So if I click go right there, you’re going to see a pop up switching you to a compatible
24
model for this feature. Submit again to confirm. So I’m going to go click and then you can
25
see it swiped it over to VO2 fast VO2 right here. Using image is not available inside
26
VO3 at the time of recording. But if it is, then you will be able to do exactly that prompt
27
that I’ve said in the future. When you’re watching this, hopefully that’s an update
28
coming really soon and you’ll be able to just select VO3 and you’re going to be able to
29
have talking as well. And it’s going to be lip sync talking. So all you’re going to see
30
here because we know that VO2 does not do any kind of audio. It’s going to just have
31
my lips moving and waving. And I also didn’t give it any camera direction. So we do that
32
in a moment. Okay. And here’s me at my desk. Let me click play. And there’s me moving really
33
good kind of lips moving like that. And it jumped in slightly on a shot. That’s nice.
34
My arms are moving very naturally, I've got all my fingers, it's very real. That's a really nice shot. Unfortunately it doesn't have speaking or any audio right now, but that's probably coming soon, so you can ignore the speech in my prompt: it's not available until you can click Veo 3. Still, that's really good. So let's do something else. Let's run exactly the same thing again and choose that image. And then: what if I want an end frame?
Now, normally you might have a closer shot of this, perhaps of my hands, if you're cutting away to it. But what if I do something really weird and add a completely different shot? Let's test it and work it out. Okay, so I've uploaded the first frame as this, and the last frame as this, and it's a black cat. Obviously not the kind of thing you'd normally pair as first and last frames, but let's test this to its absolute limits. I can also give a direction: "this man turns into a cat." Now, because this cat is in a different setting, we could have used Whisk, for example, putting in the subject and the scene: this cat at this desk. Maybe I'll do that in a moment, but let's see what happens if I just say this man turns into this cat. Once again, we have to be on Veo 2, even though there's no text or speaking in there. Let's do this, and it says, hey, we have to change you to a different model, Veo 2. Okay, let's do that. Whilst this is loading, let's actually try that: let's put in the cat as my subject and my desk, where I am, as the scene. I'm worried it might also pull my image in here, but let's have a look. And let's go: a black cat sat at this desk in this office.
And let’s go and run that. All right. So it’s given me I don’t know if it’s pulling something
55
slightly from the other illustrations that we did previously. Let’s go a black cat sat
56
at this. Let’s go ultra realistic here. And let’s run that again. I still like it says
57
learn AI video creation all in one place on the screen here. It’s got it. Okay, nice.
58
It generates me one image then come up with four. Let’s do that ultra realistic. Let’s
59
go straight on straight on shot symmetrical. Okay, nice. Still slightly animated. But that’s
60
quite nice. Let’s download that one. Let’s go back in the flow and see if the other one
61
is generated. Yeah, it is. So here’s where we had first and last shot. The cat was in
62
a field though. And the guard is desk. Let’s see what happens here. He’s talking and we
63
just kind of blur into that shot as opposed to it turns or morphs into it. That’s fine.
64
Let’s go with this. And let’s go with man turns into a cat morphs into a cat at his
65
desk right here, right here. First frame, last frame. And let’s go run. And let’s see
66
that. Let’s see. He’s talking. Oh, and his hands turned and he came into so it’s not
67
the best for using end and start frames that are completely different. And that’s not what
68
it’s for. If you are using the same shot, that’s the beginning. And then the end one
69
is closer than to use that as the start and end frame. It’s almost like using this or
70
the camera here, which the last bit I want to show you. So let’s use me here. And then
71
let’s put me into a I like to dolly in once again, look as if the cameras on a dolly that
72
wheels moving in forward and dolly forward. OK, let’s give it a little bit of instruction
73
here. Man smiling. OK, and that’s going to be on video, too, again, of course. And let’s
74
run it. OK, here’s the man. I said he’s smiling. I keep saying the man is me smiling and the
75
camera dollies forward. Let’s play this. Oh, and there’s a full here. I come in forward
76
as opposed to dolly forward. I like that. There’s a mistake had here because we can
77
actually go in. Let’s do this because I’m showing you everything here. Let’s get this
78
and let’s just prompt slow zoom in. Man smiles. Let’s separate those and let’s hit that and
79
compare them. I’d never like using these. Sometimes they’re quite good and they do it,
80
but I always prefer text prompting for it. And VO two, especially actually more than
81
VO three, it says is far better at distinguishing and understanding your prompts than ever before.
82
So this should be flawless when you text prompt for it. Well, let's see if the text prompt worked better. Oh, Veo, you have done badly again. Well, this is AI filmmaking exactly as it is: you're going to have to reprompt and reprompt and retry. But frames to video is not the best tool inside Veo 3. It is, as I've said before, primarily a text to video tool, and you're able to use Scene Builder for your consistency. The reason people often use image to video in other AI tools is that they want consistency: it's easier to build the same character with a consistent look in images and then turn those into video. This tool takes that step away, the kind of missing block, generating from text (and image) and then using Scene Builder, which we get to in a couple of lectures' time, to keep consistency across shots.
What I do think this is good for, though: let's upload a new image, for example this old photo of London. There's no video of this; it's never had one. These people have never moved before. Let's crop and save that. I know it's 4:3 as opposed to 16:9, but I quite like that. Let's see if it understands the black bars and keeps them; that's something it often fails with. So what this is good for, or what you could use it for, is taking historical images where video has never existed and, for the first time since they were shot, making them move again. I could say "old London bus drives down the street, 1920s," and generate. Let's see if we can bring this scene to life for the first time in however long since it was photographed. Yep, there's the bus moving, and the camera even moves with the bus. It keeps all its form. Really nice. This could be a whole channel, or if people are making documentaries, a way to bring historical images to life, because you don't have video footage of a lot of old scenes from around the world. Now, with Veo 2 or Veo 3, you can bring scenes to life that haven't moved since they were taken a hundred years ago or more. Before, people would need footage, or a lot of professional equipment and CGI effort, to pull this together, and now we can do it with the click of a button. You could make a great documentary series, a whole channel, around bringing this stuff to life and making it move for the first time. Really good. So that was frames to video. Once again, as you can see, this is primarily a text to video platform. The other thing we can look at right now is ingredients, which is great if you need a specific object, or perhaps you're making promo videos for a brand or something. Let's look at that in the next lecture.
— VEO: Ingredients in Veo: How to Add Props and Elements to Your Video —
The next part on this: we've done frames to video, and now I want to show you ingredients to video. Again, this will not be available if you have the lower plan. You remember earlier I showed you the plans: there was Ultra and there was Pro. On Pro it will not be available to you; you have to be on the Ultra plan for this. I think their thinking is that people using ingredients might be brands and the like that want to put something specific into an image. Now I'll show you how to use it. We can switch back to Veo 3 just to make sure, and I'll show you what happens here; you're going to see limitations again.
So let’s upload, let’s use me again and let’s also upload a can here of Coca-Cola. So perhaps
9
that’s your brand, that’s your product. Obviously you probably watching this don’t work for
10
Coca-Cola. Most of you watching won’t, but whatever your product is that you want, perhaps
11
it’s something for your scene. Perhaps it’s a weapon or perhaps they’re holding a tablet
12
or a phone, whatever it is. So I can say this man drinks a can of Coca-Cola and that’s got
13
my reference right there. Okay, let’s hit run. Once again, switching you to another
14
model. Now by the time you’re watching this, hopefully you can do that with VO3 and there’ll
15
be noises of drinking from the can, et cetera. But the only thing that missing here is the
16
audio. So if you’re having a product video like that, you probably haven’t got speaking
17
in there. You’ve probably got yourself, I don’t know, you’ve probably got yourself a
18
background track music of some kind, but we could compare this. For example, if I go text
19
to a video whilst this is generating, let’s go back to whisk again and let’s just remove
20
anything. I can do this and I can say, let’s copy all of this and then I can go back into
21
flow. Now I’ve got my text prompt. Let’s read through. A man in his thirties, light-skinned,
22
centered frame, wearing a cream colored jumper, smile. I’m not going to say about his hands.
23
The background: a studio with lighting, a microphone, a black office chair, a computer monitor that says "learn AI video creation all in one place," monitors, a white keyboard and mouse. He has a can of Coca-Cola in his hand and drinks from it. So I've described my entire scene plus the Coca-Cola, and that Coke brand is obviously very well known; the AI model is going to know what it is, if it picks it up amongst this quite long prompt. We could probably shorten it and move the can higher up, but let's run this, and then we can compare the ingredients version against this. Lastly, I want to go back to ingredients and do something a little bit funny. Let's take the Adam and Eve image and a can of Coca-Cola again, and say: "Adam and Eve in the jungle. Adam turns and drinks from a can of Coca-Cola." The AI model understands this is Coca-Cola; I don't have to reference it in any way beyond adding it right here. Let's run that scene, and now let's have a look at these when they've finished.
That first one from ingredients is done; this is where we uploaded a picture of me and a can of Coca-Cola. Let's play it. You can see I didn't tell it anything about the framing. It's pretty realistic in the first bit; then when the can comes down, it doesn't look like I've drunk anything. But you could definitely use the first part of the shot up to there; that's really realistic. So if that was your product, your own drink brand or whatever, you could definitely have your model drinking from your product. That's pretty good. You can add that in. Isn't that something?
I mean, we’re not flummoxed by this because we’ve come so far in AI models, but even this
43
like six months ago would have been revolutionary. It’s moving so quickly. I can add in a product
44
and say this person drinks this product, but we also prompted for it, didn’t we? So let’s
45
just see what happens if we prompt for it. Okay. So obviously this isn’t me, but it’s
46
prompted for a guy with exactly the same prompt definition as I have. Let’s have a little
47
look here and see if it registers the can of Coke. Oh, there it is. An exact can of
48
Coke holds it up. Yep. Okay, great. I think I said drinks it, which he doesn’t do in the
49
prompt, but you could reprompt for that. So you can see how you can actually use, if you
50
have a non-specific, you don’t need a certain model looking exactly the same as me like
51
this one is that is exactly me. Then you could be prompting for it in the same way and get
52
a product, your product. It even says learn AI video creation only one place on the background
53
here. Amazing what it’s done. And it’s got the background purple, exactly what it’s prompted
54
for really nice. And the last one was Adam and Eve in the jungle. Adam turns and drinks
55
from a can of Coca-Cola. Funny. So we’ve got him drinking a can of Coke. Eve looking at
56
him like, what the heck are you doing here? Oh, I guess another can and drinks. Oh, really
57
nice. Okay. So obviously that’s a little skewed on the text here, but that’s how you use
58
ingredients. So you might want that, for example, if you are either using it for a product
59
video or you’re trying to get something specific with prompting and you have an image of
60
it or you found an image online and you’re trying to say, you’re basically saying, hey, this
61
is the product that I want. Put this one in there and you don’t need voiceover
62
for it, then definitely use ingredients. It’s a really, really nice tool for that.
63
Now, all of this is great, and you can do a lot of it on other AI tools too, though maybe not with the same clarity. This is so good at realism; it's incredible how realistic it is, and it stands head and shoulders above a lot of what else is out there. But Scene Builder is where it really takes off, and we're going to talk about that in the next lecture. This is where you can start building your whole movie if you want to. It's really the main tool inside this text to video platform, and Scene Builder is what makes the tool worth the quite high price that it is; it really makes it worth using.
— VEO: SceneBuilder – Using SceneBuilder in Flow to Create Complete Scenes —
Now, the last tool, and probably the most exciting now that we've gone through text to video, frames to video, and ingredients to video, is Scene Builder. This is where this tool really stands apart: you can start making full scenes in one AI tool, with audio and with speech too. Really exciting stuff. So, do you have an idea yourself, or do you want one generated randomly? Here's something I haven't shown you (you've probably already got an idea you want): let's go into Whisk right here, which we've spoken about many times.
Let me just remove this right here. See this image of a dice? It rolls the dice for something completely random. So let's roll: "a photo of a cat in space with an astronaut suit, with the Earth in the background." I've done something similar to that, so let's roll again. "A close-up photograph of a pair of mismatched socks in different patterns." No. "A still of a food fight in the middle of a restaurant, cartoon-shaped characters throwing food everywhere." Okay, so I kept rolling the dice until I got something I wanted; this is the way to get something completely random: "wide angle framing of a packed lecture hall, a gorilla in a tweed suit and glasses in front of a chalkboard giving a lecture, digital painting." I'm going to change that last part to ultra realistic, photorealistic.
And let’s run that. You could, of course, be doing exactly the same thing right here
17
on Gemini. I could say, hey, give me some images. In fact, I’ll do this on probably
18
the next lecture, I think, or the one after that, where we’re going to be speaking about
19
different ideas and do some topics and we could find some viral stuff. You could go
20
give me some ideas for viral videos and give me images of it. And you could generate this
21
inside Gemini 2. But I like WISC. Okay, this is nice. This gorilla is huge, but I do quite
22
like it. Great. So let’s grab that, put it in here. And then once again, I’m going to
23
grab the prompt from it. Okay. An eye level indoor shot. Auditorium classroom facing a
24
large. Okay. Yeah, exactly. All right. Let’s copy that. Let’s go into flow. And first thing
25
I’m going to do is text a video. I’m going to paste that in and I’m going to let that
26
run. Now that’s finished generating. Let’s take a little look at that. And this is a
27
simple example of evolution by natural selection. Nice. So I didn’t ask for anything, but it’s
28
given me some voice in there. If you didn’t want voice, you could have done this in VO2.
29
And he’s talking about evolution, which is very, very funny considering he is an ape.
30
So now here’s where the magic happens. So I can now click add to scene and it’s going
31
to populate right here in scene builder. And you can see I can scrub through here. I can
32
like take the front of it, take the back of it if I don’t want the whole thing. So see
33
if I play this and he starts talking there. So instead, I can just grab it from here.
34
That’s fine. Crop that off. And then I’ve got a choice here with a plus icon. You see
35
I can go either extend or jump to so I can extend this shot right here. But I know there’s
36
some limitations right now. So if I extend this shot, so if we go here and go extend.
37
OK, what should happen next? The gorilla turns to face the chalkboard and starts to write.
38
Now, if I click go, it’s going to say, hey, I’m going to switch you to VO2 again, which
39
means if we’re extending the shot, there is now no audio in there, isn’t it? But let’s
40
extend it and see what happens. All right, let’s finish this play. This is a simple example
41
of evolution by natural selection. OK, so he turns it doesn’t have anything on the back
42
of him. That’s quite funny, actually. So you can see it turns from one shot to the next.
43
I’m not sure whether I don’t like very much that it it starts to go back out. You cut
44
it right here if you didn’t want it and do this. But the other alternative you could
45
have. So now we’ve got a scene, haven’t we? Eight seconds. Now we’ve got this extra bit
46
right here. And you could now do this. So you could even not have that shot altogether
47
if you didn’t want it. Or let me just cut this right here. Now go jump to. And I’m going
48
to say I’m going to change this back to VO3 and I’m going to say jump to close up so profile
49
shot of a gorilla writing on the board. He suddenly that’s meant to be. He suddenly looks
50
down and gasps. Shocked. OK, so this is now jumping to. So it should jump to that shot
51
we’ve just described. But once again, even though I’m using jump to, we moved us from
52
VO3 to VO2. So we’re not going to have any sound effects for that, which is frustrating.
53
But this should change soon, so hopefully you're not seeing that. Let's run this. "And this is a simple example of evolution by natural selection." All right, let's play it through. So he turns to the board and is then meant to jump to another shot, but it doesn't; it doesn't jump to it at all. So whilst this gives you the opportunity to extend your shots with really nice continuity, it still has trouble understanding your prompt when jumping to a new shot. The way around this, of course, is to go back and reprompt. If I take that initial prompt we had, I can change it to "a close-up side profile shot of a large anthropomorphic gorilla dressed in a brown suit," say he is writing on the chalkboard, and run that. Now we have a different shot where we do have the side of the gorilla drawing, but the consistency is not right between one shot and the next: what he's wearing, the style, anything like that. So the way inside Veo 3 to get good consistency is to use Scene Builder, but you are going to have to keep running through it and trying different things. Actually, let's take that man we were using before, the homeless man. By the way, if you want to remove any shots from here, go to "arrange," click the minus on each shot to erase it, and click done when you're done. So now we've got this man. Oh, come away from there; that pop-up is for uploading anything here, "add frame as an asset." Well, maybe I can mention that now. If I really liked this shot right here, I could add it as an asset and it starts uploading; I could then use that image for a video later if I wanted a different frame. Let me see, it's uploaded now: I could click it and it appears right here as the start frame. Let's close that, though. So right here I'm going to jump to: "this man's hands in close-up, rubbing them together nervously." Let's see what that does. Now let's play this through. And it cuts from here.
Now I want it to cut to the man rubbing his hands nervously, but it definitely doesn't cut away to it. That's where I'd want a close-up where he's definitely rubbing his hands nervously; it'll take some playing with. It's not ideal and it's not amazing, but it is a way to build a full scene and story. If I wanted that close-up of the man's hands, I would put the image back into Whisk, or Gemini, which is actually better for this, and say: write me the prompt I'd need for a close-up of this man's hands. It's not always perfect, and this one definitely isn't, but it is pretty good. This is how you build a scene up: you have to keep playing with it, but it's a great way to start. I could then upload other things, images and whole new elements, inside here to put my scene together. Now, that being said, you do want character consistency. If I was making a character out of this guy and I wanted him in every single shot, I would do certain things in a certain way to make sure I've got character consistency. If I'm telling a whole story about a man, like a rags-to-riches story from this guy poor to this guy rich, we could do that using character consistency. Let's talk about that in the next lecture.
— VEO 3: Character Consistency: Keeping the Same Characters Across Video Gen —
Now let's talk about character consistency. We saw in the last lecture that when you're using Flow and trying to build a scene, there are often errors in getting certain shots done, or in the AI model understanding that you want a different shot of the same person. This is primarily a text to video tool, so you might instead want to get character consistency through your text prompts when you want different shots. There are lots of ways to do this; I've shown you Whisk, and you can get your prompts from there, but I like to use Gemini for this, and I'll show you my process. Basically, I ask Gemini for a description and tell it I want character consistency, so it gives me a description I can use time and time again. I start with something like: "I want to create text to video with Veo 3. I need a text prompt that will ensure character consistency for a character I want to generate multiple shots of, in different scenes and outfits." That's how I always start, and then I describe my character: "a woman aged 50, sullen face, worn, wrinkled. She has short gray hair, is white and slim. Please provide a prompt I can use for this character, not costume or location, to get the same character each time. Also keep her voice constant: a British accent and a gentle, old voice." So let's run that and see what Gemini comes up with. "Here's a prompt focusing on character consistency for a 50-year-old British woman: a woman appearing around 50 years old, a consistently sullen facial expression conveying weariness, the passage of time evident through wrinkles. She has short, naturally gray hair styled simply; her build is slim; her complexion is fair; her voice should be consistently a gentle, older-sounding British accent." From past experience, I don't think that's enough, so I'm going to say: "run this again and add a lot of description of her facial features to ensure consistency each time I prompt for her." Okay, let's run that. Now it's got more: we're now prompting for her style of nose, her eyebrows (thin, naturally arched), her hair (straight, medium length), her lips (thin, natural yet with a downturned expression), her jawline (defined but softened). Okay, nice. Let's copy that and go back into Flow.
Let’s go new project. We’re going to do text to video. I’m going to paste this in, do this.
29
And then I’m going to first of all, I’m going to say she is sat in a church dark, moody
30
lighting. She is sat and framed to talk as if being filmed for a documentary. She says,
31
this is my story. I’ve never told this before. And let’s run that. Now what I want to do
32
is compare that. So if I put in exactly the same prompt, that’s a prompt for the woman
33
and the way she looks. This time, I’m going to say she is on the streets homeless. She
34
is positioned to being filmed for a documentary talking to camera. She says it all started
35
here on the streets. This was my home. Okay. And let’s run that one also. All right. So here
36
are my shots once again in VO three for quality. So let’s have a look at this first one. I’m just
37
going to scroll up quickly and see the next one. And yeah, when I put these together,
38
that could definitely be the same woman. Couldn’t it really, really good. Okay. So let’s click that
39
and play it. This is my story. I’ve never told this before. Definitely a British accent and look
40
how real it looks. Let me just make that bigger. Look how realistic this movement is. This is my
41
story. I’ve never told this before. This incredible, this would have been some only a matter of years
42
ago, it would have been some crazy effects CGI needed to do this. And here is the same woman
43
on the street here. I can’t believe how good it is. It all started here on the streets. This was
44
my home. Yeah. It’s even got the same accent, the same way she’s speaking consistency there. It all
45
started here on the streets. This was my home. So this is where you let me, let me pause you there.
46
This is where you’re going to get character consistency because we’ve seen when you’re
47
doing scene builder that sometimes you’re not quite getting the shot you want. But if you are
48
using something like Gemini to generate a person’s prompt for the way a person looks, and then you
49
are asking them to be, okay, here’s my scene in the church here. And she gives her intro and then
50
we cut, maybe there’s some shots in between of the location. You could do, give me some shots
51
of a rundown city area in London somewhere, and then cut to her. Now you’re putting together a
52
documentary and this looks really, doesn’t it? You’re putting together a real video. Now you
53
could do this with whatever you want with animation. You could be doing it with your
54
own shots, your scenes with something in outer space, a Western, whatever style you want,
55
anything you want. But the way to get best character consistency on a text to video model,
56
which this is where you’re not using a reference image for a person is if you want to go in and
57
generate the person’s description prompt inside of Gemini. And that way you can see how you can
58
get real good consistency between the people. The only other way you could do it is if you are
59
going frames the video and then you are limited. Of course, we know it’s going to go to VO2 right
60
now when that’s done and you’re able to use VO3, so have voices, you’ll be able to upload an image
61
that you’ve generated either on here. If you want to generate an image like this, generate an image
62
and have the same person each time or yourself, an image you have or on another AI tool. If you’re
63
creating images and you’ll be able to generate video from there. But this works really well.
64
I could create a whole story with this woman in multiple locations, give her multiple interviews
65
and talking about things. I could have a walking down the street now showing you where she lives,
66
showing you I used to sleep here as she points to a dumpster and stuff like that. Really good
67
consistency. So that is the best way to get this character consistency, which is a big one that
68
people are asking for. How do you do this on a text to video? Well, VO does understand and it
69
manages to remember intelligently what the person looks like. Really good. Really, really amazing.
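The workflow described above, generate one fixed character description and then prepend it to every scene prompt, can be sketched in a few lines of Python. The description text and scene wording below are hypothetical placeholders; in practice the description would come from Gemini.

```python
# Reuse one fixed character description across every scene prompt so a
# text-to-video model renders the same person each time. The description
# below is a hypothetical placeholder; in practice you would generate it
# once with Gemini and paste it in here.
CHARACTER = (
    "A woman in her 40s with short grey hair, weathered skin, "
    "a navy raincoat, and a quiet, tired expression"
)

def scene_prompt(character: str, scene: str) -> str:
    """Prepend the fixed character description to a scene-specific prompt."""
    return f"{character}. {scene}"

scenes = [
    "She gives a documentary-style interview inside an old church.",
    "She walks down a run-down city street in London, pointing at a dumpster.",
]

prompts = [scene_prompt(CHARACTER, s) for s in scenes]
for p in prompts:
    print(p)
```

Because every prompt opens with the identical description, the model gets the same cues for the person in every generation, which is the whole trick behind text-only character consistency.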
So I guess the only thing left to do is make a few more videos using what we've learned, and finish off the course with some actual projects, making some videos together. You can watch me make these in real time, so you can copy and learn from my own projects. It might also give you a few ideas for things you want to make with Veo 3.
— VEO: Character Reference and Consistency Using Ingredients to Video in Veo —
Now, another way to get character references or character consistency, I believe Flow and Veo call it character references, is using ingredients, which we touched on before. As we said in the last lecture, ingredients is currently only available with Veo 2; if you see that change and you're allowed to use Veo 3 in the future, you can use exactly this with Veo 3, with either text prompts or the method I'm about to show you, to get character consistency throughout your videos. You can see right here, for example, that I've been putting this character in multiple places, and I'll show you how I did that, because I created my character right here. Using ingredients, you can make sure you get the same character every single time across all of your outputs, so you can put your character in multiple locations. For example, I've got this little fluffy character here in an undersea-type animated scene; let me play it for you. And the same creature in the real world, walking down a dusty street in the countryside in America, it looks like. I think that's what I prompted. Yeah, Southern countryside, USA.
So I can make sure I get the same character. This looks like a children's animated video, for example, but you could use yourself, or real images you have, and put them in multiple scenes to tell your story scene after scene. You do that by clicking here: make sure you're in Ingredients to Video. I click the plus icon, and I can either upload, if you already have an image of yourself like we had of me here, which is really simple, just upload and the person is in, or, as I'll show you now, create an image. Let's create a little animal creature we can put in multiple locations. So I go to generate image right here. What do I want to generate? Let's say a fluffy, small creature in the style of Pixar. So I've said what it is; going back to our prompting points: what it is, and in what style. The model knows it's Pixar style, black and orange in color, small, cute, fluffy, cuddly. I keep using this emotive language, like cuddly and cute, and I put "mini monster" so it understands. And because I'm creating what is almost a reference image to use again and again, I say: put them on a white background. That way, when I'm reusing this in the future and saying "put this in X, Y, Z scene," there's nothing in my original image's background that could influence Veo's output. So let's run that and see what it comes up with. OK, they generated really fast. Here's my little black and orange fluffy, cuddly creature. He looks cool; I like him the most. Now the next thing you do is click "use this image," and it gets populated right down here. If I go to add any more, he's been put right there, so I can use him and say: put the mini monster under the bed, sneaking out, in a children's bedroom, in the style of Pixar animation, as if there are monsters under the bed, but this one's a very cute, cuddly monster. Let's generate that and see what it comes up with. While that's generating, I'm also going to do the same thing.
Click the plus icon, add him, and this time say: put the mini monster walking down the sidewalk in New York City, realistic. So this time I'm prompting for a realistic scene even though I've got an animated, Pixar-style character. "Realistic" may make my character slightly more realistic too, think a Sesame Street kind of feel. The monster is small, only one foot tall. That's the hardest thing to prompt for, and you might have to do multiple prompts, because when you say it's in a city and you're trying to tell the model the monster is only a foot tall, it's sometimes difficult for it to comprehend. So let's prompt for that and see. And lastly, let's get both our monsters together, shall we? I created this other little guy earlier, so I'm going to say: these two mini monsters eating ice cream on a park bench, in the style of Pixar animation, sunny day, bright. Don't worry too much about typos; it understands what you mean. Okay, this first one is finished: the mini monster under the bed, sneaking out, in a children's bedroom, in the style of Pixar animation. Let's play this. There's my monster, definitely in the style of Pixar animation, and there's a children's bed. He wasn't exactly under it, but I could reprompt and say "poking his head out from underneath the bed"; I only said "under the bed, sneaking out." He's definitely sneaking, crawling and looking around, and I've got the same monster right here that I generated with my image. So let's see if we can match him in the real world, and also with the other mini monster, still in a Pixar animation style. Oh, this one's just finished. Okay, nice.
So this one was the same mini monster walking down the sidewalk in New York City, realistic. Let's have a look. Yeah, I mean, it looks fairly realistic: blurred background, but still slightly Pixar. Very good quality. You know in Toy Story when they enter the real world, in the moving house and things like that? It's slightly better than that, more realistic, but I wouldn't say photorealistic. You get what I mean: this monster may have just entered the real world. So now I've got the same character. There's a slight difference with the horns in this one and not in this one, but you could just reprompt for that. And for the last one, let's see if we got our two little monsters together. Oh, that's finished. And here we are: my two little monsters, both of them definitely eating ice cream. Very sweet, sat on a park bench, munching away, looking at each other. That's really, really good. Now, the bit you might have available that I can't show you at the time of recording: you may be able to do all this and select Veo 3 instead. The only difference is you'd have audio, so you could make them talk if you wanted to, or have background music and sound effects. So that's how you make sure you get the same characters. Now I could create a whole story, a children's channel if I wanted to, or a short movie, putting this same character into multiple scenes with consistency every single time, using Ingredients to Video. And it doesn't have to be a character you've created, like this image here. I could use, say, me, and then add another ingredient. Let's add an ingredient here: I'm going to upload, from my desktop, a picture of what looks like an old British library reading room. Let's crop and save that while it uploads. While that's waiting, I'm going to type: this man sat on a chair in this room, close up. So I've got this man, which is me from the lectures, and I've got this room, which I'll describe as an old British library reading room. Yep, let's go with that; that's the reference. Let's run it and see how well it does putting me in there. Now it's finished generating. Here we are. It's put the microphone in there, which I didn't prompt for; I guess it's taken it from my first image, since in the image of me I have a microphone in shot. It's always better, if you can, to have as clear an image as possible. I could ask for myself on a white background and use that as the reference when I generate. But there's me inside this exact room. That's the exact sofa that's in there.
Let me check that. Yeah, look, same pattern and everything. Wow, really good. Here's me reading a book inside; it's even got the fire crackling here, me turning the pages, looking through the book, and then it cuts to a close-up. Wow. So you can use even yourself if you want to. It's even matched the lighting of that room, bright on this side just like the room is; if I go back to the image here, you can see the light source. So I could put myself in multiple locations. You could put yourself, a person you've created, or images you have rights to anywhere, and create whole stories. I could make a documentary movie and put myself anywhere I want, telling a story. The possibilities are truly endless. So this was character references: getting consistent characters using Ingredients to Video, which you saw me use previously when I was adding in objects, great for marketing videos. This is how you use ingredients for character consistency. Again, right now this is probably Veo 2; if, in the future, you can click this and add Veo 3, the only difference is you'll be able to prompt for sound effects, music, or speech. So I would have old classical music playing in the background and have him, this character of me, say, "I used to read this book all the time, every time I visited here." Then I'd prompt for head movement and things like that, and how I want him to say it. Amazing stuff. Really, really good. I hope you enjoyed that. Now let's go on and create some viral videos in the next few lectures, start using all these tools we've put together, and you can watch me go through them step by step, so you can use them yourself and get going making whatever videos you want to create.
— VEO3: Recreating Viral Videos with AI —
OK, now the fun part. Let's get into this and make some projects, some actual video projects, based on everything we've just learned in this section of the course. Let's actually do some text to video and make something a little bit meaningful. Now, you could ask Google Gemini, "hey, give me some ideas," and I'll do that in a moment, but I follow the AI video space closely, and I know some of the biggest trending formats right now: the singing Bigfoot, street interviews, influencers doing stupid challenges, ASMR content. So I'm going to create some of these projects, because students always ask me; they send me links to these videos asking "how do I make these?", and it's often the same answer: you just need the prompt for it. I've already shown you how to take screenshots and get prompts from them automatically. So I'll show you exactly how you do this over the course of a few mini projects in the next few sections. First, though, in Gemini I'll ask: "I want to make a video with Veo 3. Give me 10 ideas for viral video ideas that are popular." Let's do this; it'll take a second to work it out. Okay. A special capability is realistic human characters. There's the concept and the prompt idea: "a day in the life," that's quite a nice one. "POV: you're experiencing something unusual or dramatic," you could be in the middle of a volcano or something. Unexpected transformations and reveals, hypothetical debates and scenarios, authentic storytelling and vulnerability, "AI reacts to" human nature, satisfying content with a solution, quick explainers, myth busting, nostalgic trips. Okay, so these give me some ideas, and I might prompt for 50 of them, or start pasting in links to other videos I've seen and asking for viral ideas like those. The first thing I'm going to do in this project is the influencer one. Let me just show you this: these are influencers doing ridiculous, stupid things. I'll copy this URL and write: "I want to create a video like this," then put the link to the video in here, "where influencers do ridiculous challenges, like sitting in liquid cement until it hardens, or jumping out of a plane. Give me prompts for Veo 3 with full descriptions and voices speaking, saying what the challenges are. American accents. Have the influencers be stereotypical in look, challenges and language. Gen Z, funny, comical." Okay, let's go. Let's see this: a young man with blue dyed hair, wearing an oversized graphic t-shirt and ripped jeans. He stands nervously in front of a giant tub filled with thick grey liquid cement, in a cluttered, bright indoor studio space, random props scattered around. Voice: hyper-enthusiastic, slightly nasal Gen Z American accent. "Yo, what's up fam? It's your boy, CementSlayer69, and today we're going full send. That's right, we're going to be chilling, literally, in this fresh batch of quick-dry." So it's got the Gen Z language, and it's actually taken the idea I had. Let's try that one first, and then I'll come up with my own. Let's copy this and paste it into Flow. Now I'm going to work on my own, similar one, but the influencer is a wannabe football player, a sports influencer, saying he's going to throw footballs over the fence at the White House until Donald Trump throws one back or the Secret Service shoot him. Okay, let's get a prompt for that.
A muscular young man, backwards baseball cap, tight-fitting jersey with a made-up team logo, overly enthusiastic energy. He's holding several footballs in his arms, standing near a tall, ornate black fence that implies the significance of a government building. Sunny day, lush green lawn. He's looking intently at the fence with a determined expression. Voice: overly confident, slightly bro-y American accent. "All right, what's up sports fanatics? It's your boy Touchdown Titan coming at you live. Today's the day we go big. We're slinging pigskins over this bad boy until either the GOAT himself, Donald J. Trump, fires one back, or, let's just say, the Secret Service better have their gloves on. Let's get it." Okay, perfect. Let's take that one and do exactly the same thing: paste it in. The first time, it rejected it, I think because the prompt said "Donald J. Trump fires one back." Here's my first generation right here: the young man with blue hair, about to get into a bucket of cement. Let's play it. "Yo, what's up fam? It's your boy CementSlayer69, and today we're going F-U-L-L send. That's right, we're going to be chilling, literally, in this fresh batch of quick-dry." And then he goes to stand in the cement; it kind of morphs at the end there. So let me go to the scene, drag along to the end where he puts his leg up, and extend: "man gets into cement and sits in." Let's extend that, and note it has to be Veo 2. All right, let's play this back and see how it did. It actually does it. These are really easy to create, and I can see how they go viral; I could make these all day long. Now, back to the football one, because that failed again; it said something about not allowing Donald J. Trump, I think. It's good that they have this restriction, because you can't use celebrities and make people say something they didn't say. So let's rework it and take out Donald altogether, leaving "until the GOAT himself fires one back, or the Secret Service better have their gloves on. Let's get it." Okay, nice; I took out all references to Donald J. Trump and left the rest as it was. Let's see what happens. "All right, what's up sports fanatics? It's your boy Touchdown Titan coming at you L-I-V-E. Today's the day we go B-I-G. We're slinging pigskins over this bad boy." Nice. Now, we've only got eight seconds, obviously, so I would just adjust the amount of dialogue, and you could put that in your prompt too: the whole thing can only last eight seconds, and he talks quite a bit there. He's got a football jersey on, the Eagles; he's holding a whole load of footballs right there, and he's going to throw them over what definitely looks like a White House-style building. It really works. You could create these very easily, and I can see why the originals have almost a quarter of a million views on one channel, and almost a million views on the impossible-challenges one: really good, really easy to make, funny and enjoyable. You can make these easily now you have these prompts. All right, on to the next project: I want to re-create something else I saw. These are glass ASMR videos, chopping through glass fruits.
Really nice. Okay, so let's find something. Do I want to do glass fruits? Yeah, that seems like a really nice one to do. This time I'm going to screenshot this; let's just grab that, come over into Whisk, and drop it in here. It's analyzing the image; let's see if it works out what it is. Okay: "A close-up shot of a single vibrant red berry glistening with a dark viscous liquid, possibly chocolate or syrup." It can't work out exactly what it is, so let's copy this, go into Flow, and do exactly what we were doing before, but edit it to say what it really is: made of glass. "A silver knife blade cuts into the glass strawberry from the right side of frame," and let's remove "appearing to be in mid motion," which came from the screenshot. "It cuts through the glass and the glass strawberry falls in half." Really nice. Now, rather than a strawberry, which we've already seen, let's make the whole thing an apple: this one is a glass apple, and this one is also an apple. Nice. So now I've also got the background: warm tones, and it's made of glass; the background recedes into darker, out-of-focus tones. Close up. Okay, I have an apple; let's send it. Now, to compare, I want to try a really simple prompt: "a knife cuts through a glass banana. The banana is made of glass." I often say things twice to make sure the model knows. "Close up. ASMR. Satisfying." Let's just correct these typos and send.
Oh, okay, the first one's done. Here is the glass apple; let's see what happens when I hit play. And the noise! Okay, the ending was a little weird there: it looks like a real apple on the inside as it falls in half, so I would reprompt and say "the inside of the apple is glass." But that first part, the sound of the knife cutting through glass, it really knows what cutting through glass would sound like. That's amazing, so good. Okay, let's see the second generation right here, almost done. This one was prompted much more simply, remember: no color mentioned, nothing like that. The glass banana, yes, and it's see-through. You could say "yellow glass banana," but let's see if it got what I meant from a very simple prompt. That was satisfying. Really nice. There are loads of videos, especially on TikTok and other social media, where people just want to see what happens, and they'll watch these over and over because they're really, really satisfying. I know it probably sounds weird to some of you, but I would watch these shorts; I would scroll through them and watch a knife cut through loads of different types of glass objects. I don't know why we watch this stuff, but it was really simple to do. Even with this simple prompt, which I'd probably just change to "a yellow glass banana; the banana is made of glass," it understood exactly what I meant and even gave me a really nice composition.
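The glass-fruit prompts above all follow one pattern, including the deliberate say-it-twice repetition of the "made of glass" detail, so you can stamp out a whole series from a template. This is just an illustrative sketch; the template wording and fruit list are my own.

```python
# Generate a family of glass-ASMR prompts from a list of fruits. The
# template repeats the "made of glass" detail on purpose, mirroring the
# say-it-twice trick described above; wording and fruit names are
# illustrative, not the exact prompts from the lecture.
TEMPLATE = (
    "A knife cuts through a glass {fruit}. The {fruit} is made of glass. "
    "Close up. ASMR. Satisfying."
)

def glass_prompts(fruits):
    """Return one fully formed prompt per fruit."""
    return [TEMPLATE.format(fruit=f) for f in fruits]

prompts = glass_prompts(["strawberry", "apple", "banana"])
for p in prompts:
    print(p)
```

Batching prompts like this is how these satisfying-cut channels can publish a new variation every day from the same proven format.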
I said it's close up, and this works really well. Okay, second project done. You can see how these get really good views; they're really satisfying to watch. So there's another one for you. We're on the ASMR train now, and it's a really good niche to work on; these videos are so popular. Original ASMR was somebody whispering. So for the next project, I want to see if I can make a realistic person doing ASMR. AI could take over ASMR channels: you don't actually need to do it yourself anymore; you could do it all with AI. If you don't know what ASMR is, I can just search for it here: A-S-M-R. You can see it's normally a person; this one is tapping the microphone with their nails, or they whisper into a microphone. It stands for autonomous sensory meridian response, and it's where you get a kind of tingling, relaxing feeling; some people use it to fall asleep, like those ASMR ear-cleaning videos where the sound moves from one ear to the other. So let's see if we can replicate something like this. Let's go back into Flow. This time I'm not going to use another tool like Whisk or Gemini as we normally do; I'm going to type this in and see if we can use our own prompting. First, I'm going to start with a character. I've written: a young woman, age 22, mixed race, attractive, slim, is sat at a microphone on her desk. She's doing an ASMR video. We see her front-on, as if we are the viewer. She has long fingernails, and taps on the microphone and whispers, "is this ASMR working for you?"
And I've divided it up, "A S M R," with points between the letters, to make sure the voice doesn't try to say it as one word. "The background is her bedroom, but out of focus." Let's run this and see if we can create ASMR content, because these channels rack up millions of views: a million views three months ago on ASMR Jade, for example. People watch them to relax and to fall asleep; they enjoy them a lot. So you could create a whole channel of different people, or one person you've created, doing ASMR. Amazing. Let's see what Flow does with that prompt. Okay, this has just loaded; so far, so good. We have an attractive, mixed-race girl in front of a microphone. Let's see if the action I described, rubbing her nails on the mic and whispering, comes out. I'm going to play it for the first time with you. "Is this A-S-M-R working for you?" I can't believe how perfect that came out. That is exactly what ASMR is: running her fingernails on the microphone to give you that sensory feeling, whispering, tapping her nails.
It looks extremely realistic. Let me make this big; look at how realistic she looks. The lip syncing is perfect, the nails, the sound of the nails tapping on the microphone, everything. Now, the only thing you'd want to do, inside your editing software, is what real ASMR creators do: pan the audio from one ear to the other as she whispers. So I would generate multiple of these, pan the whisper between ears, and you've got yourself an ASMR video, a channel, and a whole ASMR personality you've created. This is incredible; I can't believe how well that came out from that prompt. You could start an ASMR channel today with this, or a channel based on any of these formats. In fact, there's one more thing I want to try in the next little mini project. So the last thing I wanted to try was influenced by this channel, which does street interviews. I watch a lot of these; they come up all the time, and there's a famous guy who runs up and asks people what they do for a living. So I'm going to do street interviews. Now, this is a complex prompt, and I'll tell you why. "A young influencer is doing street interviews." I haven't said what he's wearing or anything, by the way. "He's 20 years old, skinny, Gen Z, very energetic. He's running down the street in busy New York City. He runs up to a man, age 50, overweight and strange-looking," and I haven't explained what strange-looking means, "and asks him: Hey, what do you do for a living? And the man replies: I kill people. And the young man looks shocked." It's a play on that channel, if you've seen it, where they ask "hey, what do you do for a living?" and find out what people do and how they make money, except here he runs up to a serial killer; it's just making fun of the format.
175
So let’s see, this is complex because I haven’t told him what it looks like. He has to run down
176
the street and speak. And this is all in eight seconds. I haven’t said what a strange look is.
177
And the response I say, the man says this, the boy says this, the man says this, the young man
178
says this. So I need to see if they understand what I mean by this scene. If they do,
179
VO3 is extremely intelligent and complex in its comprehension of prompting. So let’s see
180
how it does. Okay. I haven’t played this yet. It looks like many people are running. I don’t know
181
if this is the guy there’s only eight seconds to do this whole scene. I’m going to play it for the
182
first time with you. Hey, what do you do for a living? I kill people. Okay. So you can see how
183
it messed up because he asked it. Hey, what do you do for a living? And then I kill people. He
184
replies, but in his voice. Okay. So it’s nearly there. Let me see if I can work on this prompt a
185
little bit more. A young influencer is doing street interviews named Zach. He’s 20 years old,
186
Gen Z, skinny, very energetic. He’s running down the street in New York city. He runs up to a man
187
named John, age 50, overweight and strange looking. And I’m going to say, and Zach asks,
188
John, Hey, what’d you do for a living? John replies. I kill people. Zach looks shocked.
Let's see if assigning them names, so there's some differentiation between them, helps the model. I'm not sure; we got pretty close last time. Okay, once again, I haven't played this yet; this might be John, this might be Zach. Let's see what happens. "Hey, what do you do for a living?" "I kill people." Oh, it's still done it; it's still assigned the voice wrongly. So I need to keep working on that prompt. Let me actually take it, copy it, and go over to Gemini: "I am trying to create a prompt for Veo 3 about a conversation between two people. It keeps assigning the wrong voice to the wrong character. Here is the prompt. Please make it clearer for Veo 3." Then I paste in the prompt; typos don't really matter, it understands. Let's see what it comes up with and whether it can fix it. Okay, skimming the result: a young, skinny guy on a busy street; a man, John, age 50, stranger-looking than Zach; Zach speaking in a fast, energetic Gen Z voice, John speaking in a low voice: "I kill people." Let's see if this works. This might be the fix, running it through Gemini; it's a Google product after all, so it should know what Veo 3 wants. Or maybe the scene is just a bit too complex, although getting this far on a text prompt alone is so advanced for an AI model. I love it. So let's see how it does. Okay, did Gemini solve the problem? Let's play it. "Hey, what do you do for a living?" "I kill people." Almost, almost got there. We might just have to keep reprompting; that's part of the fun of AI, isn't it? Even the same reprompt can give a different result, so reprompt and reprompt again. That's all part of the fun of AI video. I hope you enjoyed those examples. We've now gone through everything: how to use Veo, how to get it, where best to access it, how to prompt (the most important thing), all the tools, text to video, frames to video, ingredients to video, scene builder, and some real-world examples of the kinds of projects people are building that you could be making too. Not to mention, you could also make short films and some serious-style content; it would be great for that. It's all there in Veo 3, all the tools at your disposal, so please go and play. And by the time you watch this, there will probably be even more available, both in the generations themselves and in the intelligence of the model.
— SORA 2: Desktop – Introduction —
Now, a quick update lecture: Sora 2 has been released, and I want to tell you about it. Inside this course, I've still got the lectures on Sora 1, as it's now called, in here. Sora 2 has been released as an app, currently on iOS, with wider availability coming, I'm sure, very shortly. And everything I say comes with a caveat, because things are changing so much every week: the allowances, the locations where it's available, and the fact that right now on desktop you need an access code, but I'll explain that in the upcoming lectures. I just want to say that I'm adding Sora 2 in because it's an amazing tool that everyone's talking about. A lot of people are using it on the app and creating some really good stuff, and it's almost its own social media platform. I'm going to cover using it on desktop, because we're focused on making AI movies and scenes, so you'll see me creating lots of things on here over the next lectures: scenes I might want to use inside my videos. If you want to do cameos of yourself, you'll have to use the app right now; that might change shortly. So it sits somewhere in between: a tool for creating really amazing, realistic videos, but also its own social network. This will probably change the landscape of AI filmmaking, I think: everyone has access to creating really good AI videos and posting them on its own social media platform. It's a really great idea to combine AI video with your own social media platform, which is what Sora 2 is doing. So I'm going to cover the Sora 2 update: using it on desktop, getting access, and how to use it. The Sora 1 lectures are still there, because depending on your plan and maybe your geographical location, you might want to use Sora 1 too. But the next lectures cover Sora 2. Let's get into it. Let's have a look at Sora.
— SORA 2: Desktop – Getting Access —
So, why might you want to use Sora 2 on your desktop rather than on your phone? Well, it depends entirely on your workflow. Using it on my desktop suits the projects I make. For example, I make videos; let me show you some of these here. If I go to my profile and then into my drafts, you can see I make movie-style video projects. So I might want to work on one of these, take a look at the video being created, download it and then use it inside my project. If you're just using the app, then perhaps that's what you'll primarily look at in this course. But you may want to use it on desktop, so I'll show you how. The layout is fairly similar, but there's also access and other things like that to cover.
Now, I'm going to say this with a huge caveat. Access right now may be different from when you are accessing this tool. It's probably going to be different next week from when I record this, the week after, and the month after. Right now, if you want to access Sora 2 on your desktop, you may be limited by location to the US and Canada. But again, that's probably not the case as you're watching this now. You also need an invite code. You might see a screen like this when you come in, saying: what? Give me an invite code. To stop the servers overloading, because this is a really popular new tool, access is invite only. Each person who has access gets up to four invite codes they can send out. Also, if you join with the iOS app, then you can get access on desktop too. Again, all of this may change soon. A lot of people are trying to get access codes on Discord or on X, or finding other ways to get them. It's just to slow down the uptake, because they don't want to crash the servers. But again, when you're watching this, it's probably a different story and it's available for you to use.
There are also some different plans available here. So if I go into billing, again, this is telling me it's for Sora and not Sora 2; they haven't updated everything on the billing front yet as I'm recording this. Now, it may be that you'll be able to get access on the $20-a-month ChatGPT plan, and also for free with the app, but you may get limited-resolution videos and a limited number of them. You may also get a limited number on the free plan. And then, very similar to tools like VO3, the higher plans may get higher resolution, longer videos, and more and faster generations; it's yet to be decided, and it will all change. So when you're using this and you log in, you'll be able to see the plans on ChatGPT, and it'll probably list Sora 2 access right here. These will change, and the prices may differ depending on where in the world you're looking; they might show in your local currency. Just to give you complete transparency on the uptake and access for this, which will constantly change, I'm simply going to show you the tool and how to use it. But just so you are aware.
Now, obviously, this is Sora, which is OpenAI, connected with ChatGPT. So we can use ChatGPT a lot when we're developing our prompts. I could use another AI tool like Gemini and ask it to give me a perfect prompt for Sora, and later I do have a dedicated lecture on prompting for Sora. But I can use ChatGPT directly. Whereas something like Gemini might be grabbing information from external sources and whatever is available online, ChatGPT, because it's part of OpenAI and so is Sora, is going to have inside information. So it's good to use ChatGPT, I think, when you're prompting or trying to get information to use inside Sora. We'll do that within the next couple of lectures.
Now, this is Sora; this is what it looks like. In the next lecture I'll go through the layout so you fully understand what this is all about, what everything means, where you prompt, the settings and so on. But in this lecture I just want to show you some limitations. If I go over to my drafts right now, I can access them individually here (I'll show you that in a bit), or I can go over to all my drafts, and you can see there are some limitations right now. Again, caveat: this may change shortly. When I'm on the desktop version right here and I don't have the iOS app (perhaps you don't either), I'm not able to upload photorealistic images of people. I tried to upload an image of myself to turn into a video, and it wouldn't allow me to. If you're using the app, then of course you're going to be prompted to record a video of yourself, 15 seconds or so, looking in different directions, so it knows who you are and you'll be able to use yourself in videos. If you've already done that, then you'll probably be able to upload photorealistic images of yourself, or just use yourself as a cameo when you click "Add yours" right here.
Now, there are also some other limitations. For example, this one right here: content that violates guardrails, containing similarity to third-party content. This was a prompt I did for Jake Paul, the famous boxer and vlogger, who is right here. I said he's in a ring, and the whole place, the whole gym, is Hello Kitty themed. I don't think they were blocking because of Jake Paul, because I can use Jake Paul as a cameo; if I scroll along right here, he's available right there. So I think the prompt block was Hello Kitty. There are going to be some things with branding that are blocked and that you might not be able to use. But I've also seen videos where you can use some branding in shots, so I expect it depends on exactly what it is, and perhaps also on the location you're in. I know there are some availability restrictions in Europe and other places, as opposed to America, because there are different AI rules. But that's all to be worked out as the release goes on; perhaps that's why the release is going global quite slowly for this tool.
So there are some limitations right now. In the next lecture, let's go over the layout so you understand exactly what everything looks like, where you prompt, where you add different settings, and what everything means, so you know your way around. Then we'll get into some prompting and some creating, and you can see exactly what this tool looks like.
— SORA 2: Desktop – Layout —
Now, the Sora layout.
Let's understand exactly what everything means here and how to use it before we get going, so you're not lost when you come to prompting or trying to find what you've created.
The site you're going to access is Sora through ChatGPT, and you'll more than likely log in with the same credentials you have for ChatGPT, or you can create an account.
You can come here to the Explore page.
This is a really good page, and it's set up much like a social media platform.
If I scroll through here, there are loads and loads and they'll keep loading for ages, and you can have a look at all the different videos right here.
Also, if I double-click one, here's one with Shaq and some tigers.
If I click it, I can hear it playing, and I can also open it up so it comes right here.
Now, there are comments on this, just like a social media platform would have.
And I can also see if there have been any remixes of it.
So let me find another example.
Here, for example, this one where you've got these guys on a doorstep doing a hacker-style dance: I can see below any remixes that have been done with this video, the prompts, the looks, the music and sound effects being used.
Really cool.
So you can go along, get inspiration, and see what people have changed: they've prompted it to change them to sharks, or to skeletons, probably for Halloween coming up.
So this is the Explore page.
It's a really good place to go through and see what people are doing and any remixes, and you can comment and like.
You can do all kinds of stuff, just like you would with TikTok or any other social media.
That's the Explore tab, which is right here over on the left.
Now, if I come down right here, I can search for people.
For example, if you know a creator or someone, you can search to find them here.
So if I search Jake Paul as an example, I can click here, have a look, and see anything he has on here.
Here are some posts he's got; I can go through and have a look.
Then maybe I want to remix one myself or take inspiration, or I can follow him, or do a cameo.
So I could do a cameo: click, and I can add him in here.
What I can't do, if you see "Add yours", is create my own cameo: cameo creation is available on iOS, so it's only available on the app.
I cannot do this on desktop at the time of recording, though that may become a thing of the past very, very soon.
So that's where you can search for people: if you know a creator, search through and try to find people you know who could be on here using this.
The next one right here, notifications, is where you get all your activity: you can find anyone that's followed you, and you can also have a look at the videos you've created.
For example, I created this one of a guy walking through New York City; I'll come back to it in a minute.
Oh look, this is the video.
Really nice: a guy walking through New York City, very realistic, really good.
It will also show you if anything has failed.
So when I uploaded a photo of myself and it wouldn't let me, or when I used the Hello Kitty prompt I mentioned in the last lecture, it showed me where those failed.
Here's where all your activity is.
Now, if I come down onto here, this is my profile.
You can obviously change your icon here, your display name and anything else, and you can see all your posts right here.
So if I go to one of my posts, this is the Jake Paul in a pink boxing ring, because I wasn't able to do the Hello Kitty version.
I could click Post and then it would be over here under my posts, which means it's public and people can see it.
Right now it's a draft; until you click Post, it's not public.
You can also see cameos by me, any likes and things like that.
So it's just like a social media platform, which is what Sora is pitching itself as: the great connection between AI video creation and social media.
Whether people are just going to take these videos and post them on other socials, or whether this will become its own social platform (which would be huge) remains to be seen, and probably really soon.
Down here you've got other things which I don't need to show you: settings; Invite friends, where, as I mentioned, you have a certain number of invite codes; Switch to the old Sora, which you'll see later in the course; and Log out.
Down here is where you're going to prompt.
Right here I would type in my text prompt; we do that in the next lecture.
I can choose someone to cameo with right here if I wanted to, Sam Altman for example.
And I can also click here to upload an image, which, if it's a photorealistic image of a person, as we've seen, I can't do on desktop right now.
But I could upload an image of, say, a night scene in any city in the world and prompt from that image directly, or take the style from it, so you can upload it right here.
And here are my settings.
Now, if you're on a pro plan (or a more expensive one, when that comes into play), I expect you'll be able to choose a different quality: 1080p rather than 720p, and so on.
Right now, I think I'm on the free plan, if I remember. Let's see. Yeah, I'm definitely not on the pro plan.
So I'm able to change my orientation from portrait (social media) to landscape, perhaps if I want it for YouTube, and my duration to 10 or 15 seconds.
I'm pretty sure that on a pro plan (but again, this will change, so I say it with a caveat) you can have 1080p and also up to 20 seconds.
Again, this depends on your location: only available in the US and Canada as I'm recording this, and that will change.
I'm sure this will have changed by the time you're watching.
You may see more things available here, but the layout will stay the same.
And that's what I'm showing you.
So that's the layout of everything here inside Sora 2.
If there are any huge updates, then we'll update this, but I don't think they're going to change this platform layout very much.
Oh look, a hippo coming through a store. That's cool.
Okay, so let's get into the next lecture.
Let's actually create using Sora, and you can see what I do; I actually combine this with ChatGPT when I create.
Let's create some different styles of video and see what Sora 2 has to offer.
— SORA 2: Desktop – Creating Videos —
Now, creating with Sora 2. Let's create some examples and see how this tool performs. As I mentioned in the previous lecture, you should be using ChatGPT, given the synergy: Sora and ChatGPT are from the same company, OpenAI. So we should be using it for our prompting. There is a lecture in the next section about prompting for Sora, and I'll go into that in depth there, talking about things like different styles, because if you're trying to make, I don't know, something retro or film noir and you don't understand what that is, I cover some styles. So you can go ahead and check those out. That's a really good resource for prompting and understanding, but here we can use ChatGPT to generate our prompts for Sora rather than having to guess.
So in ChatGPT, I could just start by asking: what is the best structure for a text-to-video prompt with Sora 2? Now, I mentioned why I use ChatGPT: they're the same company. As opposed to another AI tool (I don't know, Gemini or anything else you could be using), which takes information from whatever it has available online and grabs data from everywhere, ChatGPT of course has inside information on Sora. So I'd be using this.
Now, the perfect structure, it says, has: a scene overview and setting, plus subject and action, that is, where we are, what time of day it is, the subject (a person, what they look like) and the action. Then visual style and camera direction: is it in a style like film noir or retro, does the camera follow from behind, is it zooming? Then the lighting and mood: is it moody, dark, gritty, bright, sunny? Composition and framing: is it a close-up (I'll talk about this later), a wide shot, from the side, over the shoulder, and such like? Lens and movement: is there a racking focus, does it go from blurry to in focus, for example? And then a style reference or realism level: you might want to say realistic, photorealistic, and things like that.
Now, there are example templates right here. For example: a quiet Tokyo street in the rain. The subject is the person, what they're holding or wearing, and you can go into more detail here to describe the woman exactly how she looks: age, ethnicity, clothing, any facial features or other features you want to mention. Film style: the cinematic look. I love using a cinematic look. Camera: follows her from behind. Lighting and mood: neon lights glow softly, pink and blue, rain mist diffuses the light, dreamy atmosphere. Really good. Composition and framing: symmetrical composition, so she'd be in the center of the shot with buildings pretty symmetrical on either side. Lens and movement: you can say things like "shot on a 35mm lens" if you're familiar with that. For example, you could have a really tight lens, 200mm, or a really wide lens like a 12mm, which would be almost fisheye. Shallow depth of field: that's where the person is in focus and the background is blurred, something like portrait mode on your phone, which you might be familiar with. And a smooth, steady steadicam motion; or is it jolty and shaky, if you want to create tension, for example? Now, style references: who is it inspired by? You could use a director, photographer or artist, and you could use other movies, Blade Runner in this example. An ultra-realistic 8K film texture. Aspect ratio: we'll actually set this in our settings; you can prompt for it, but there's no real need.
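The prompt structure just described can be sketched as a small helper that assembles the named components into one text prompt. This is only an illustrative template of the structure suggested above, not an official Sora 2 format (Sora accepts free-form text), and the field names are my own.

```python
# Sketch: assemble a Sora-style text-to-video prompt from the components
# discussed above. The section labels are illustrative, not an official
# Sora 2 format -- Sora accepts free-form text prompts.

def build_sora_prompt(scene: str, subject_action: str, style: str,
                      camera: str, lighting: str, composition: str,
                      lens: str, reference: str) -> str:
    parts = [
        f"Scene: {scene}",
        f"Subject and action: {subject_action}",
        f"Visual style: {style}",
        f"Camera: {camera}",
        f"Lighting and mood: {lighting}",
        f"Composition and framing: {composition}",
        f"Lens and movement: {lens}",
        f"Style reference: {reference}",
    ]
    return " ".join(parts)

prompt = build_sora_prompt(
    scene="a quiet Tokyo street in the rain, at night",
    subject_action="a woman in a yellow raincoat walks slowly, holding an umbrella",
    style="cinematic film look",
    camera="follows her from behind",
    lighting="neon lights glow softly in pink and blue; rain mist diffuses the light",
    composition="symmetrical wide shot, subject centered",
    lens="35mm lens, shallow depth of field, smooth steadicam motion",
    reference="inspired by Blade Runner, ultra-realistic film texture",
)
```

Filling the slots in one pass like this keeps you from forgetting a component (lighting or lens, say) when writing prompts by hand.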
So given that, let's actually ask it: okay, create me a prompt for Sora 2. (It doesn't really matter about typos; it does know.) For Sora 2, I want a white male, age 25, in a Knicks basketball jersey, walking through Times Square in the day. The camera follows him as he walks, and he is smiling at people and saying hello, overly friendly. Now, what this allows you to do: I could of course just follow the structure right here and fill in what I want, but this lets you talk in a more conversational way, describing what you want like you would to a friend, and then it formats it for you. Rather than having to concentrate on the format, you can concentrate on saying what you want to say. So let's run that and have it make this prompt for me.
It's set: a 25-year-old male wearing a blue New York Knicks basketball jersey, walking through Times Square during the day. The camera follows him from behind (if you wanted it from the front, you should say so) and slightly to the side. Oh, that'd be nice; yeah, sometimes it can think of things you don't think of. He moves through a crowd, smiling warmly and saying hello to people as he passes. He gestures casually, radiating friendliness and positive energy. The scene is bright and cinematic, filled with billboards and taxis. Here's the scene, what's in it, and here's what it's "shot" on: a 35mm lens, smooth handheld steadicam motion. Great. The lighting is natural daylight with realistic tones, inspired by urban lifestyle commercials and feel-good YouTube videos, ultra-realistic.
Let's copy this prompt and put it in here. The first thing I want to do is paste the prompt right there, then make sure I go to my settings. I want this at 10 seconds; that's fine. Orientation: I want landscape, not portrait. Then click right here and run that.
Now, see what's happening here: you're still on this screen, so I could still prompt for another. If I go right here, you see it's spinning right there; this is my profile. It's actually spinning and working, although it looks like an old draft right here, that other video I showed you in the last lecture with Jake Paul boxing. It's actually running the draft generation in the background as it works, and you can see it loading. Very quickly; I've maybe been waiting 30 seconds or so right now. It depends on server load, anywhere from maybe 30 seconds to a couple of minutes to generate a video like this.
Now that's finished creating. You can see it right here on my profile. If I click over onto here, I can see all my drafts. That's great. Let's click on that one. This is the Knicks guy. Let me just turn that up.
"Hey, how's it going? Good morning. What's up, man? Have a good one. Appreciate it. Take it easy." Nice. Okay, let me just pause that for a sec. So I've got the shot. Now, it didn't adhere to the prompt exactly. You can see right here it says "follows him from behind"; it followed him from the front. Is it from the side? Maybe slightly. He's overly friendly and he looks at the camera; you could prompt for that not to happen. He is very realistic: he's talking and speaking, saying "hey, how you doing?", "nice to meet you", stuff like that. It's really, really nice. You can see the watermark here; again, with the plans and everything changing so quickly, you'll see on your plan whether you can have that removed, depending which plan you're on.
And what else? It looks really... let me play it one more time. "Hey, how's it going? Good morning. What's up, man? Have a good one. Appreciate it. Take it easy. See you around." Handheld steadicam as it follows. He's definitely in the jersey; even the Nike tick, and the NBA logo says New York here. I didn't prompt for him to have brown hair, or blonde hair, or red spiky hair; those are extra details you'd need to add in here. But a really good generation. Really, really nice. So I can click right here and download it or delete it, and I can of course post it onto my feed for people to see.
So, a really nice example. If that wasn't exactly what you wanted and you wanted it from behind, I would emphasize that and re-prompt again: just say it more than once, or put it at the start of your prompt. That's just what AI video prompting is all about. If you were creating this for a scene in a little movie you were making, you'd probably have to do multiple prompts to get it.
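That re-prompting advice (repeat the must-have requirement and put it at the start of the prompt) can be sketched as a tiny helper. The function name is my own; this just mechanizes the habit and is not part of any Sora or OpenAI API.

```python
# Sketch: front-load and repeat a must-have requirement in a prompt,
# as suggested above for stubborn details like "camera follows from
# behind". Purely a text helper, not part of any Sora API.

def emphasize(prompt: str, requirement: str) -> str:
    """Put the requirement first, then restate it after the main prompt."""
    return f"{requirement}. {prompt} Important: {requirement.lower()}."

base = ("A 25-year-old male in a New York Knicks jersey walks through "
        "Times Square, smiling and greeting people.")
print(emphasize(base, "The camera follows him from behind"))
```

Each re-prompt with the same text can still give a different result, so you may run the emphasized prompt several times and keep the best take.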
So let's go back. It doesn't matter which screen you go to. Now let's do another one right here; actually, I'm going to do two at a time. I'm going to add in an image right here. Here's an image of Times Square. I'm going to say: I want a prompt for a 30-year-old Asian woman walking in this scene in Times Square. Use the image for style reference, wide shot. Create a prompt for Sora 2; I will upload this image in Sora 2 also. When I do this, I'm going to upload it right here. So I want the prompt created for this; once again, we are using ChatGPT to create our perfect prompt for us. And it says: yep, exactly like that. A 30-year-old Asian woman walks confidently through Times Square, bright cinematic wide shot, color tone referencing the image, and it's giving me more on lighting and things like that. Perfect.
So let's just copy that over and go back to Sora. I'm going to make sure my settings are still 10 seconds and landscape; of course you could change that. I'll paste in that prompt and upload the image. Now that's uploaded and it's in there; let's hit run. Now it's added to the queue.
While I'm waiting, let me also show you the last thing you might want to do. Let's grab someone like Sam Altman. If you're on iOS, or you've created your cameo already, you could add yourself, but I'm going to use Sam Altman. It's a really simple prompt right here: I type "Sam", so I've tagged him. You can either type the @ tag or just click. If you know someone else on here, you can of course add them too: I could @ whoever it is, if I know they've got an account and a cameo that can be used. It's: "@sama driving a convertible flying car through London." As simple as that. Obviously I could give more details; I could develop this prompt like we have done before, and I could change the orientation if I wanted it for social media, but let's just keep this at landscape and run that also. Okay.
That's finished my first generation already. Let's go and click it. By the way, if you hover over any of these, you'll hear them start playing. Let's click on this and watch. Nice. Okay. So this one doesn't have any speaking, because I didn't prompt for it, obviously, and it has this kind of background music and the noise of the city. Really good. It opens with the shot (let me mute that for a second while I play), the shot I originally gave it to set the scene, which is really nice. And then you've got this woman walking down the middle of the street, because I gave it that reference image. You could of course prompt for the sidewalk, but she's walking down the middle of the street right there and it looks really nice. The walk is good. We said "confident" in the prompt, and her shoulders are going back and forward. Very confident, look. You could reprompt if you didn't want that. So this looks really nice. Really good.
Let's see if the other video has finished. Oh, just finishing now, the cameo, and now I can see it's finished right here. Let's click on this and have a look. Once again, this was Sam Altman in a flying car through London. Okay, let's have a listen. Okay. That's really nice. Look at this. I don't know whether the car is super realistic, but that's down to you, depending. The scene is there: there's a London bridge, there's the Shard, there's the London Eye. When he's speaking, he calls it the "Tams" instead of the Thames, I think, but maybe that's just an accent thing. It looks really good and realistic. I didn't give it any prompting for what to say; this was a really quick prompt. You would need to prompt in a lot more detail if you wanted specifics like that. I didn't describe the car; I didn't do anything like that. This was a really nice video.
So I've got some good examples here that I've shown you. To use these, once again, if I click on one, I can just download it right there, and then it's in my downloads. Let me open that up and show you. Here's the video we have of the woman walking through Times Square. Yeah, really, really nice. The quality depends on the plan you're on and how much of your plan you're using, and it's all going to change soon, I'm sure.
So that was Sora 2: a really good tool to be using. Very realistic, really nice movements, very responsive to your prompting. I've got Sora 1 included too, and you can compare them. Sora 2 is leaps and bounds above Sora 1, I think, but the Sora 1 lectures are still included. Really nice. Probably a tool you'll want to be using if you're creating AI video, depending on your plan and what the plans end up looking like. And if I were using this for creating short films and such, you could easily create whole scenes and put them together to make short movies, films, adverts, commercials, whatever it is you wanted, and the quality is good enough for sure. A really, really great tool.
— Sora 1 Overview: AI Video —
1
So, probably for a lot of you, the lectures you’ve been waiting for, alongside Runway
2
as one of the most amazing AI video generation tools, there’s also Sora, which is by OpenAI,
3
you can tell by the logo here, the same as ChatGPT and DALI, which we’ve explored in
4
this course. So, over the next handful of lectures, we are going to be looking at Sora.
5
Now to access this, it is Sora.com and you can log in with your ChatGPT login, Gmail,
6
etc. If it is busy, sometimes you can get a notification come up saying that it’s not
7
accepting people right now, it’s too busy. Just wait the next day or something, you’ll
8
probably be able to gain access. If you go over to our site, there is the link once again.
9
And I need to remind you that if we go back to the aivideo.schools.ai prompting, if I
10
scroll down, there’s AI image generation tools and Sora, there’s a whole section here about
11
prompting for Sora, because yes, you’re going to notice in a minute, Sora has text to video,
12
which I say never do, but if you’re going to do it, it’s amazing in Sora, text to video
13
and image to video also. So, prompting is perhaps somewhat more important because you
14
can prompt almost like you are generating an image. So, some of the same techniques
15
that we’ve used inside of Mid Journey, you can see you need to say lots of things like
16
about the characters, the settings, mood, style, which you wouldn’t have to do when
17
we used Runway and we were importing image and doing image to video. And you wouldn’t
18
with image to video also. But once you get access to Sora and come on over, you'll land on a page that looks just like this. You'll probably end up over here on the right, where you see Featured. This is a bit like Explore in Midjourney: you can see all the amazing videos being featured by Sora. I like this one. This looks cool. And there's some claymation stuff, and I can see the prompt right there. So I can go into any of these and come back out. I wanted to look at this one: tiny antlers, fluffy creature, round belly, spinning motion. Like a made-up animal, and the camera motion looks great. And that fur looks incredible.
I'm going to give you the answer right here: Sora is incredible. It is at least as good as Runway, which I use predominantly, because you're going to see some of Sora's drawbacks, and they aren't in its creative ability. The creative ability of Sora is probably the best on the market, I would say, and will probably remain that way. I think it's absolutely outstanding.
Okay, so that's the Featured page; I'm getting carried away now. You can also go to Recent and see recent generations, which haven't been featured or selected by Sora. They're completely random, and you can see lots of different stuff in here. The good thing about both Featured and Recent is that if you like something, like I really like this one of a sheep right here, I can save it, come back out, and it's now in my Saved. So you can go along storing up things you like and remember what prompts they're using. You can see this is a very generalized prompt, "close up of a sheep eating grass", and it's done it with multiple different shots and cuts, which is amazing. It's really, really great.
So the downside is definitely not its ability to generate videos, as you're going to see. The downside right now, I think, is the price. Let me show you. I have ChatGPT Plus, which is $20 a month and allows up to 50 video generations (1,000 credits) at up to this resolution, because you can actually do 1080p, as I'll show you. And then if I want up to 500 videos (10,000 credits), unlimited relaxed-mode videos (generated slower, like the relaxed modes we saw in other tools), and up to the highest resolution, it's $200 a month. So no, it's not for everybody, but if you're doing this full time and 500 videos is enough, then absolutely. It's catered more towards professional users or companies than the general user. If you're just getting into this, if you're a beginner on the course, you might want to go with Runway first. If you really get into it, you can upgrade to Sora, or test it out for a month, or stay on the $20-a-month plan with its limits and go between the two. That's the biggest drawback with Sora right now: the cost of the subscription. But it is really good.
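To put those plan figures in perspective, here's a rough cost-per-video sketch. The $20/50-video and $200/500-video numbers come from the plans above; the arithmetic is mine, and relaxed-mode generations on Pro aren't counted, so Pro's real per-video rate can work out lower.

```python
# Rough effective cost per included video on Sora's two plans (figures as quoted above).
# Pro's unlimited relaxed-mode videos aren't counted, so its real rate can be lower.

def cost_per_video(monthly_price: float, included_videos: int) -> float:
    """Return the effective dollar cost per included video generation."""
    return monthly_price / included_videos

plus = cost_per_video(20.0, 50)    # ChatGPT Plus: $20/month, up to 50 videos
pro = cost_per_video(200.0, 500)   # ChatGPT Pro: $200/month, up to 500 videos

print(f"Plus: ${plus:.2f} per video")  # $0.40
print(f"Pro:  ${pro:.2f} per video")   # $0.40
```

Interestingly, both plans work out at the same headline rate per video; what the $200 plan buys is volume, resolution, and relaxed mode, not a cheaper rate.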
Let me show you. To generate, quite simply, right down at the bottom here you'll see the generation and prompting bar. This is where you type in your prompt, and we can go through some of these options. Let me do one now: that sheep eating grass got me in the mood for an animal, so let's do "a cute puppy eating his toy ball". Okay, let's prompt that. Now, that was not a very good prompt. It's not a detailed prompt, and I'm giving Sora a lot of license to make up what it thinks I want. Once again, over in the prompting guide I've broken down what I think makes the best Sora prompt into five points, but you will get something from anything. So let's go over to All Videos, and I can see it's generating right now.
Whilst we wait for that, let me show you the other tools at the bottom here. If I go to the plus, this is of course where I can upload an image or choose from my library. We'll get into that later, because after this we do a lecture on text to video and one on image to video separately. Next are Styles. There are presets like Balloon World, Film Noir, Stop Motion; I can go to Manage and look at what these mean. I can also add my own preset, which is pretty good. So if I wanted one called, let's say, "Pixar", I could describe what it is and what it looks like, and then have the Pixar style in my prompting. If you're doing your own project and everything needs a similar style, maybe Pixar-style animation or a certain look and feel, you could set up your own preset right here and everything will be created to that preset, which is really good.
Then you can also do 480p, 720p, and even 1080p, as you saw in the plans at the start; 1080p is available should you go on the $200-a-month plan. Amazing, though obviously it takes longer to generate, as it says here. Next we've got the duration: 5, 10, 15, or 20 seconds. This depends on your plan and on the quality you've chosen; at 1080p you cannot do a 20-second generation, it's limited like that. Next is the number of variations. I've got it set on 2, and you can do 1 or 4; a bit like when we were playing with images in Midjourney, you choose the number of generations you get each time you prompt.
There's also Storyboard, which is a really nice feature. You're going to love it for creating AI video, but I'm going to cover it in its own lecture in a couple of lectures' time; it warrants its own lecture for sure.
Let's take a quick look at these videos. If I come up here and just scroll across, it starts playing. Now, he's not eating his ball, he's playing with it, trying to bite it for sure, but that's okay. Let's have a look: slow motion, and look at the ripples in his coat of fur. That is really nice. Normal number of digits, slightly slow motion for sure. There's also another puppy in the background, and a teddy, it looks like. Let's look at this next one. Oh, it made a, see, Sora is not perfect. No generations are perfect, and if you're doing text to video, they're definitely less perfect than image to video: it just made up a second ball right there. But the fur does look incredible, and his movement is great, just like a real dog would move. That's incredible movement. So it is good, but nothing's flawless, and you would run multiple generations of a shot like that.
That's everything for the layout. I haven't generated anything apart from this, just quickly to show you; we'll do proper prompting in the next lectures, because yes, I want to show you both text to video and image to video with Sora. Let me just show you some things I was doing the other day when I was testing. Remember we had that drone shot? I wanted to see what Sora would do with it. This one doesn't move, but the drone does wobble, very much like a real drone would. And in this one the shot moves with it, focusing on the drone as the camera goes across: very realistic, really nice, and no warping. And then a man walks through New York City. I think the prompt was "a man in his forties wearing a suit walking through New York City, we follow him from behind". You can see, yes, very nice. He's walking straight down the middle of the street through a market, which is made up, but look at his hands swinging side to side, and his footsteps, with a little jump there. So not perfect, but really close. Look at the legs on the one on the left: the walk is a tiny bit off, but they're really good. And this is from text to video, which is always incredible to me.
All right, so that's the layout. There are other small things: here are your favorites, like I mentioned, and here are your uploads. When I uploaded these images (I believe this one, because I wanted to test the drone shot myself), there are folders and everything. You can also filter when you're looking for things: if I was in All Videos here, I could filter by what I'm searching for. And that's pretty much the whole layout of Sora. So now you're on Sora, you're in there, you understand the layout, where things are, and the possibilities. Let's do a real quick lesson on some prompting things you have to think about. That'll be a minute maximum, but just so you have it. And then let's do some generation: we'll do it with text to video, then a few shots I'll compare directly to image to video, and we'll look at them side by side to see which results are better. This is exciting.
— Sora Prompts: Get the Best Results —
This will just be a minute-or-so lecture, just so you're aware of some things you want to think about, because in the next two lectures I'm going to prompt both with images and with just text, and there's a slight difference needed between the two. Some discussion is needed, because it's Sora and we now have both options.
Over on AI video dot school slash AI prompting, where we haven't been since the prompting section earlier in the course, if I scroll down to Video Tools and go under Sora, let me draw your attention to this. As I mentioned, Sora gives you the option to do both text to video and image to video. With text to video you need to think about more detail, like we did with AI image generation; with image to video you don't, because you're already giving it the information in the image.
So here are the key guidelines, five points really, when prompting. Some of them you won't need for image to video, but you will for text to video. If it's text to video, remember to include the essential descriptive elements. The characters: their appearance, their clothing, their age, features, personality, even the background behind them, just like when we were generating images. The setting: the environment, with intricate details like the decor or the weather. Mood and style: define the atmosphere or artistic direction; we've done a whole section on styling, so you're familiar with that. Camera movement: specify angles or techniques; remember earlier in this section where we talked about things like lenses and movement. And shot type: is this a wide shot, a close-up? You obviously won't need to tell it the shot type, and perhaps not what the characters are wearing, if you're uploading an image; but you will need all of these for text to video. I've got some examples right here.
There's also a balance to be had between brevity and detail. If you give it less, like you saw with the puppy example in the last lecture, it will interpret how it wants and add its own creative element. Sometimes that's amazing and you get results you didn't even think of; but give it a detailed prompt and you'll get closer to what's already in your mind. If you have something you know you want, be more detailed.
Now, it's going to struggle, just like Midjourney did, with something really abstract. For example, "a dragon exploding into fireworks made of water": it has to understand what a firework is, that the fireworks are made of water, and that the dragon is exploding into them. That's way out there; you could keep trying and trying, but it's going to be very difficult. "A dragon flying over a moonlit forest" is far easier, because it knows what a moonlit forest is, what glistening scales are, what a dragon is, and it can put those things together because they're real.
Because this is ChatGPT, you can be conversational: you can add emotional adjectives, like "a tranquil lake", and it will respond to them, whereas with other models I've said to give direct instructions rather than emotion or extraneous detail. That's okay with Sora. And once again, don't forget your cinematic language: camera angles, lens types, techniques, et cetera. If you're uploading an image you need it less, but for text to video you will.
And always add your style. I tend to do this even with Sora when I'm uploading an image: I still quite often mention whether it's black-and-white film noir or cyberpunk or something, because Sora sometimes adds its own stuff, cuts away and moves around. It's quite inventive by itself, so I do that to make sure anything else it adds is also in that style. If you need the styles reminder, once again it's over on AI video dot school slash styles. Now let's get in and play with Sora; let's do some text to video.
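The five-point structure above can be sketched as a small helper. This is only an illustration; the function and field names are my own, not anything Sora requires, and the resulting string is what you'd paste into Sora's prompt bar.

```python
# Minimal sketch of the five-point Sora prompt structure described above.
# For image-to-video you'd typically drop `characters` and `shot_type`,
# since the uploaded image already carries that information.

def build_prompt(characters: str = "", setting: str = "",
                 mood_style: str = "", camera_movement: str = "",
                 shot_type: str = "") -> str:
    """Join the non-empty elements into one prompt string."""
    parts = [characters, setting, mood_style, camera_movement, shot_type]
    return ". ".join(p for p in parts if p)

prompt = build_prompt(
    setting="The city of London, bright, sunny day",
    mood_style="Hyper-realistic",
    camera_movement="Drone shot over the city",
)
print(prompt)
# The city of London, bright, sunny day. Hyper-realistic. Drone shot over the city
```

Leaving a field empty is how you lean towards brevity and give Sora creative license; filling them all in pulls the result closer to what's in your head.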
— Sora 1 Text-to-Video: Bring Your Ideas to Life —
Now let's get into it. Let's go and play with Sora. I want to do both text to video and image to video and compare them side by side. And yes, you haven't seen me do text to video, and you won't see me use it in the other tools we're looking at, because it isn't good: where it exists on a platform (some platforms have even removed it), it's not great, and it's never a good way to get consistency among shots. Sora is a bit of an exception here, because it's actually quite good at text to video. But I'm still a fan of image to video for sure, and I'm going to show you why.
So I think the best thing we can do inside Sora is this: in this lecture I'll prompt four or five shots that tell a bit of a story, different types of shots, and then I'll do the same thing with images that I turn to video, created off-platform, in Midjourney I think. So let's think about the shots. If I have an establishing shot of London, then the outside of an office in London, then a woman at her desk in an office, and then a close-up of the woman's face, those four shots could tell us a story: we're in London, outside an office; this must be inside that office; here is a woman; and a close-up of her, so the story is about this woman. I'm going to generate those four shots with text to video, then do the same thing when I already have the images in the next lecture, and we can compare them side by side and see which is better.
I'm going to use the prompting guide over on AI video dot school once again, going back to where it had the prompt structure for Sora. I'm going to copy it and paste it in to remind myself what I want, and you can use this too. So I'll fill it out. I don't need any characters in my first shot, so remove that. Here's what I've got, erasing the labels as I go. The setting: the city of London, bright, sunny day. Mood and style: hyper-realistic. Camera movement: a drone shot over the city; let's add that in. I don't need a shot type on this one, because I'm already telling it it's a drone shot over the city. I want my aspect ratio to be 16:9, I don't need to tell it a style because I'm saying hyper-realistic, let's go for 720p, two versions, and five seconds is plenty. Let's generate that.
Now that's in a queue. Whilst it's there, I'll set up the next shot: outside an office in London, UK, bright, sunny day. Mood and style: hyper-realistic. Camera movement: the camera is static. Shot type: wide shot with the office in the center is what I want. Okay, generate that.
Now my next shot does have a character in it, so: a woman in a red dress is sat at her desk in an office, working. Setting: the office is modern, many desks and people, bright, open. Mood and style: hyper-realistic. Camera movement: a medium wide shot, see the woman clearly and slowly zoom towards her. That's quite a lot of detail, and it also covers the shot type, so let's remove that. I was going to add "in a red dress is sat at her desk", oh no, that's already in there. Okay, generate that.
And the last shot. Characters: a woman in a red dress, looking worried. I was going to say scared, but in AI that often gives you an over-the-top scared look, so I'm going to say worried, and she's still in an office. Mood and style: still hyper-realistic. Camera movement: a slow zoom into her. And the shot I want is an extreme close-up of the woman's face; I'm telling it that because I want to see the worried expression on her face. Let's generate that.
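For reference, those four shot prompts can be kept as structured data and assembled the same way. This is only an organizational sketch (Sora has no public API at the time of writing, and the dictionary keys are my own labels); the printed strings are what you'd paste into the prompt bar, one per generation.

```python
# The four shots of the London mini-story as structured prompts,
# following the five-point structure from the prompting lecture.
# Hypothetical data layout; Sora itself is prompted through its web UI.

shots = [
    {"setting": "The city of London, bright, sunny day",
     "mood": "hyper-realistic",
     "camera": "drone shot over the city"},
    {"setting": "Outside an office in London, UK, bright, sunny day",
     "mood": "hyper-realistic",
     "camera": "the camera is static, wide shot with the office in the center"},
    {"characters": "A woman in a red dress sat at her desk in an office, working",
     "setting": "the office is modern, many desks and people, bright, open",
     "mood": "hyper-realistic",
     "camera": "medium wide shot, see the woman clearly and slowly zoom towards her"},
    {"characters": "A worried woman in a red dress in an office",
     "mood": "hyper-realistic",
     "camera": "slow zoom, extreme close-up of the woman's face"},
]

for i, shot in enumerate(shots, 1):
    # Join the fields that are present, in a fixed order.
    prompt = ". ".join(shot[k] for k in ("characters", "setting", "mood", "camera") if k in shot)
    print(f"Shot {i}: {prompt}")
```

Keeping the shots as data like this also makes it trivial to reuse the exact same prompts in Midjourney for the image-to-video comparison.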
So they've all now generated. Let's take a look at these. Once again, if I just scroll across you can see them fine, but I'll make them bigger too. Let's look at this one. Okay, the traffic seems to move a little quick and there's some morphing right here, which I'm disappointed about; but if you're not looking at that bit, the rest looks good. These really look like London buildings: there's the Shard, there's the London Eye, which is normally next to the river in Westminster, but let's not get too picky. That does look good. Compare it to the other one: more of a fisheye, and the river splits, which the Thames doesn't do this close to central London, and this bridge doesn't go anywhere. So not a great one, but I do like that movement with the fisheye lens.
Next one: this is an office, people are walking (they look a little quick), there's a London bus right there. And this next one is a really nice camera movement. When you're far away, the people just look like they're walking, it's fine; nobody morphs into someone else from what I can see. And it's even put in this guy who just disappears in the corner. These are the modern, new London buses, and they look just like this. This is really, really good.
Then we've got the woman in the office. This one's the nicer shot, I think: a slow zoom into her, enough fingers, and she's working. You'd probably cut around here anyway, but that's a great shot. Is the screen okay? I think it is. This other one is actually a nicer movement, though I think the person opposite our lead is balancing their laptop on nothing.
And then I asked for a close-up of the woman's face. So for that shot, I'd maybe use this bit here. You'll see this in Sora sometimes: when you create one shot, it adds its own second part, and you can use a tool called Recut, which is fine, we can do that later; I would just use this part of the shot. Let's look at the other one. Ooh, instantly quite a bit of morphing, a lot of movement. It's dramatic, and it would be really nice movement if you had a camera rig there, but not great.
So if I edit this together in post, you can see our story goes: we are in London; we are outside this office block; inside, there's a woman sat at her desk; something is about to happen, and she's extremely worried about it. So you can see what text to video looks like inside Sora. Nice generations, and I've only done two iterations of each; you'd obviously do more and more. We know from all the tools we're using that it takes more than this, especially in the image section, where you saw how many I was generating.
So that was text to video. To do a little spoiler for image to video, let me bring this on screen. What I've done, and I'll show you again in the next lecture, is use Midjourney to generate with exactly the same prompts: the city of London, bright, sunny day, hyper-realistic, drone shot over the city. And I've selected one from each. I've only let each prompt generate once, except this shot, because we couldn't see anyone clearly. So in a very fair way, I've let it generate once, downloaded these, and next we'll use them inside Sora for image to video.
— Sora Update: Now Upload Images of People —
So, this is an update lecture. As I promised, as things change inside all the AI tools, and here inside Sora, I will add updates, so I'm just slotting this lecture in here. I'm going to keep the next lecture, where I'm complaining, because previously it was very difficult to upload a person. In fact, as you'll see me complain in the next lecture, there was a blanket ban on uploading images of people. You could create them inside Sora, but you couldn't upload external images, which made character consistency really, really difficult. I'm keeping that lecture because, apart from the complaint, it shows how to continue with our little project where we're making that scene. So we'll keep it in there, but just ignore the part where I talk about not being able to upload people and whether I think that will change soon, because it has changed, and now you can upload people. I also asked whether this would be on the basic plan or only the more expensive one: I'm right now on the $20-a-month basic plan, I think it is, and I'm able to upload people. So yes, you are able. There are limitations on the cheaper plan, things like the resolution you can use, but you can upload images of people, no problem now.
So here are images of me, for example, that I was playing with, and some more of me there. The way to do it is quite simple, and then we'll get on with the next lecture, where I show you the next stage of our little project. If I hit the plus icon right here, I can either choose from my library, things you've created inside Sora, or upload from my device. So let's upload this image of me right here. Then, quite simply, depending on the resolution you want, perhaps I keep it square as it is, perhaps I want 16:9, and I can say I want this bit, maybe getting the top of my hair in there. And then I can prompt: "this man sat at desk, waving and smiling". Again, you can do more with the prompting, as we've been discussing, but this is just to show you; we get on to storyboard and other things afterwards. So let's upload that and take a little look.
Now that's finished. It actually didn't generate all four, which is strange; I did have four versions selected, so maybe that's still loading, or there's a bug or something. You can see it's got me right here, and I move. Not that one; in that one I definitely wave, and in that one I come in behind my desk and wave. What I find with Sora, unlike other AI tools such as Midjourney, is that it takes your image, renders it, understands you are a person in a setting, and then gives you some really quite different kinds of shots, which is good. In some cases you might want that, and if I'm using more than one AI tool, it's great that it does this. Look at this shot: it's actually really nice, this top one, a zoom in, really nice camera movement, and I definitely wave, and it looks just like my image, just like me. So it starts on the image and then comes out; there we go, you see that? And if I hold on this one, you can see the same thing right there. This one starts from the left of frame and comes in. This one's made a woman, so it's taken something completely different: it had me, and then it turned me into a woman, with a beard. There it is, and then I'm waving. This one over the top left is my favorite.
It's slightly different from what you'll perhaps get on other AI tools, and it can be difficult to get exactly what you want compared to some other platforms, but some people really love this. Depending on what you need it for, if you need a specific shot in a specific way, sometimes Sora doesn't give me the best results and other tools do. But you are now able to upload people into Sora, which is great for consistency: if I need this person and need multiple shots of them, I can upload those and keep the character consistency from an external source. Previously, as we've seen, I could only create the images inside Sora and animate those; now I can upload externally. There's an ethics video in this course: make sure you have permission to upload the image and from the person you're animating in the video. So yes, in the next lecture, follow along exactly, but ignore the part where I say you can't upload people and complain about it, because you now can, and on the cheaper plan too. Just so you're fully aware, you can do that. Okay, let's continue with the course.
— Sora 1: Image to Video Explained —
Now on to image to video, which, as you know from Runway in the previous sections, is going to be our main way to generate video, because we want consistency across our story, et cetera, and I think it just produces a better video overall with the detail you're giving it. So we're going to compare these. We saw in the last lecture that we had this project: we're setting up in London, we're outside this office, there's a woman at her desk and she's very concerned, although those last two shots didn't go that well.
Now, because we're doing image to video, I've used the exact same prompts. Once again, if you click here and click on the prompt, you can see it. I've taken those exact prompts, put them into Midjourney, and generated these images. Here's our first one: the city of London, bright, sunny day, hyper-realistic drone shot over the city; I chose this one to download. Here is outside an office in London, UK, bright, sunny day, hyper-realistic, camera is static, wide shot with the office in the center; I chose this one, that's a nice London office. Then for the inside we had a woman in a red dress sat at her desk in an office; the office is modern, many desks and people, bright, open, hyper-realistic, medium wide shot, see the woman clearly; I chose this one here. And then I wanted her looking worried. This one looked like a computer game; this one was definitely worried, but I chose this shot here. So these are all downloaded.
What I'm going to do inside Sora is upload them right here. The only direction I'm going to give, if I go back to my prompt structure, is the camera movement: I've already done my characters, my setting, and my mood and style, so the only thing I'm going to give Sora is camera movement. So if I push the plus button, I can upload an image or video. There's my first image to generate. Once again five seconds, two versions (which is nice, because in Runway you get one and then have to regenerate), 16:9, okay, let's do that.
Now our next shot is outside the office right here. I didn't give that one any camera direction last time, and I've got two versions of it. I think because it's a high, wide establishing shot, it's going to give me movement over the city; if not, then I should have, but from playing with this before, I think it will, and I want to give it that freedom. Here, though, I'm going to say "very slow zoom in". That's what I'm going to say: 16:9, 720p, five seconds, two variations, go.
Our next one is the woman at the desk, and for that I'm going to give instructions with a character. I'm going to say "slow zoom into woman sat at her desk in red dress". Now, what I'm worried about is that Sora does love to make stuff up, so I'm hoping it won't invent another woman in a red dress and instead registers that this is the woman in a red dress. Runway would, it definitely would, so I'm putting this to the test. Once again 16:9, 720p, five seconds, two versions, go.
The last one is our woman looking very worried. I'm going to say "slow zoom into worried woman's face", because it will probably add some more movement to her. What Sora does, obviously, is generate a whole 3D character (it's so good with movement) and then move them. So I'm telling it she's a worried woman, so that when it registers her, it keeps that expression on her within this shot. Let's upload that.
106
Now I’ve uploaded all these images and here
107
I’m going to show you a problem with
108
Sora.
109
Yes, we already went over the last one
110
that you need a pro account at 200
111
bucks and perhaps that solves the problem with
112
this, but I’m not entirely convinced that it
113
does.
114
Now I’ve uploaded these two images, but currently
115
with Sora, and I’ll update this when it
116
changes, there’s an issue with a lot of
117
things here with regards images with people.
118
So yes, when I upload our shots to
119
compare with people, so I’m not even going
120
to be able to do a direct comparison
121
shot for shot.
122
You can see how nice these are and
123
you saw with text the video, but hey,
124
it says error.
125
There’s people in this.
126
If I double click this, it says you’re
127
not allowed to yet.
128
Your account currently is not supporting creating videos
129
with uploaded media with people in it.
130
Okay, fair enough.
131
What if I had a pro subscription and
132
paid $200 a month?
133
Well, actually a lot of people are saying
134
they don’t even get to use with people
135
there and it’s only a select subset of
136
people that are allowed to do it.
137
Some people with a pro subscription are experiencing
138
the same thing with Sora from yesterday.
139
Now Sora has not long launched, so it
140
could be having the problem where it has
141
too much information and is covering its back
142
and doesn’t let anyone generate images of people
143
unless they’re in this subset of people.
144
Right now, I’m unable to generate even with
145
a pro plan images of people, which is
146
a huge, huge setback, but that’s obviously not
147
going to be the case for everyone and
148
on this platform at all for a long
149
time because that’s a huge thing with AI
150
video that people want.
151
So by the time you watch this video,
152
you’re going to be able to generate with
153
people, of course.
154
Of course, you’ve seen we did it fine
155
and flawlessly inside of Runway and it was
156
great.
157
So yes, Sora is going to allow you
158
to.
159
And obviously this is Sora OpenAI, so let’s
160
ask chat GPT themselves.
161
I explained, hey, I’ve only got the basic
162
package right here.
163
If I upgrade, will I get access to
164
be able to generate with people?
165
And right now, as of today, like I’ve
166
said, this will probably be gone very soon
167
as the answer is about to explain OpenAI
168
Sora AI video generator currently restricts the uploads
169
of images featuring people, including self images to
170
prevent misuse such as deepfakes and unauthorized impersonalities.
171
The limitation applies to both chat GPT plus
172
$20 a month and chat GPT pro at
173
$200 a month subscriptions.
174
At launch uploads involving people are limited, but
175
OpenAI plans to roll out this feature to
176
more users as they refine their deepfake mitigation
177
tools.
I’m surprised they launched with this restriction, because it will put off a lot of people, but as it stands you can do everything but people, and the results do look great. They’ll obviously lift that restriction in time. But like I mentioned, as it says: if you’re on a basic plan like the one I showed you earlier and you’re thinking, hey, I’ll upgrade to $200 a month and I’ll be able to generate people, at the time of recording that’s not necessarily going to immediately grant the ability to animate images of people. OpenAI is actively working on expanding this feature responsibly and will likely provide updates as they enhance their safety measures. For the most current information on the feature, you can check out the associated press coverage and any of the links that they have right here.
Let me show you these other images with Sora. So here’s my image of London. This is really nice. That’s nice there. They always do multiple copies of the Gherkin, which is ours; see here, there are four of them in that shot. There’s only one in London, but all AI images seem to do multiples. So if I scrub across, there’s not any movement, but it’s like the morning, and then we come into London. Yeah, really nice. And this one also, where the clouds break and it’s the morning. That’s a really nice shot. Let me look at that close up. That is really nice, like a time lapse. Really nice. And then this image, I asked it to zoom in very slowly, and from what I can see, there’s no camera movement at all, but the trees are blowing in the wind. That’s really nice. And the next one, also no zoom in, which was annoying. And then we have this people issue.
So yes, Sora has great images and can even do text to video. But right now, at the time of recording, there is this bizarre issue with people and Sora. And I’ve seen people generated inside Sora; you can see it in Featured right here. Lots of images of people, and I’ve seen loads of people do it. It is available, but whether Sora is overrun with these images, problems are coming up and they’re restricting it, only allowing people with a Pro plan after a certain time, with some trust, some karma, I’m unsure. By the time you watch this video, you’re probably going to be allowed to generate that, no problem. But I have to show you this issue, because if you watched me generate a person and then hit this problem yourself, you’d think there was something drastically wrong with your Sora account. But that is a limitation. So $200 a month, nice images, great, but it can’t always do people. You can see why there are a lot of images here that don’t have people in them.
And right now, those are the downsides of Sora. But it’s going to get better and better. Look how amazing that is with the movement. I feel it just needs to drop the limitations and the price and make itself more accessible, and fix some of these other things with images that even I can’t access right now. Now there are a few other things I want to show you: storyboard, and then some editing things inside there, which are crucial, I think, and one of, I think, probably Sora’s greatest assets that they have in here. Let me show you that in the next lecture.
— Sora 1: Editing Features —
Now, a couple of things to show you inside Sora. We’ve generated videos both with text to video, where it’s a leader in that field, and with image to video. There are some other things, with editing and especially Storyboard, that I want to show you. Now if I take a video, like here’s one that we did right there, let me open that up. I’m going to show you some of the things and tools inside Edit here before we jump into Storyboard. So I can do some things like edit the prompt if I want to: that jumps up and I can edit and say zoom this in, pan, turn it to night, change the duration, aspect ratio, whatever it is I want to do. That’s great; it has that option for video, really nice. I can also view it in Storyboard; that’s what I’m going to show you in a moment. Now, I can recut this inside here.
Let me go on Recut. Now, you see, if I scroll along, remember this was a shot, that’s really nice, and then it cuts into, turns into, another shot. The reason they have Recut as a tool on here, I think, is because Sora loves to do this. It loves to make up its own thing, and it does it really well; it can give you lots of inspiration. If I was making this project, I’d probably even keep both of these shots because they’re really nice. But if I drag this along, it now finishes there and I can trim back to here, so I can just get rid of that second shot right now.
And now if I add this onto is now in my storyboard this is my storyboard I can edit the size of that shot.
19
There’s some other things on here like remix if I want to basically generate this again I can say
20
hey do you want to do a strong remix significant changes to the original video.
21
A mild noticeable changes subtle minor changes or custom I can set and remix the strength on here.
22
Which is now at strength 7 there’s a mild somewhere in between if I wanted to and I can hit remix.
23
Then there’s also blend if I click that I can choose from our library right here there was this one
24
shot we had and also this one.
25
So let’s just put these two shots together to show you what they look like actually maybe it’s
26
better if I use that shot.
27
So now these will blend together hey do you want this to transition blend like that or maybe I could
28
sample blend influence based on one to another mix it like merge the clips together or custom.
29
I’m going to say mix this okay and it just mixes like this this is the symbol for it let’s go blend
30
added to my queue right here let’s go back out and it’s at the top of all my videos.
31
And now here is ready click to watch here is my blended video from one over to here and it’s kind of
32
just done its own thing blended from here to another shot kind of like here.
33
That wasn’t what I was expecting at all and Sora does this let’s go and do that again with blend I’m
34
going to choose another video from my library let’s choose that one again but this time let’s go
35
transition blend and blend.
36
And now that second stitch has finished let’s take a little look at this one okay it’s like it’s
37
taking inspiration from the second image because it’s slightly more blue hue and just changing that
38
so it’s not blending it at all.
39
Sora has a mind of his own sometimes all right let’s go back to my video now actually I finished the
40
features I want to show you here I can click on there there was remix blend I mean there’s also loop
41
in there if you want to make a looping video so that it never ends.
42
Which obviously if I show you here you can do either a normal a long or short loop so the video just
43
loops onto the next to the next to the next which are great for social medias or ads because people
44
don’t know when they finish.
45
So they keep watching on to the next bit adds to your attention time people on social media love
46
them but that’s obvious what that is I don’t need to show you that the last bit I want to show you
47
is storyboard which is really good.
48
Now if I click storyboard right here it pops up with this I’ve got five seconds right here I could
49
change this if you had different plans or if I have this in for 20 I can go to 10 seconds right here.
50
And this is basically where I’m laying out a sequence think of this as your edit sequence of shots
51
now I could have there’s one and after two seconds it changes to this one after four seconds to this
52
six seconds to this eight seconds and maybe it finishes on another shot.
53
Now for each one of these I could upload an image if I want to so I could upload an image that I
54
have let’s upload that image of London right here and you’ll see what happens as soon as this
55
uploads look at this loading right there on the two second marker.
56
It’s going to automatically load in a description an aerial view showcase of spawning cities
57
cityscape with a river winding through it and then it describes it river dotted boats environment.
58
OK that’s great let me add in right here zoom in and then let me put right here a plane flies flies in the sky.
59
Now I could obviously go through and you can add another image you’ve generated so you could have
60
your images from mid journey you could have whatever you want put them all in here that two seconds
61
the three and this is a bit like in runway where we had our first and last shot but you can have
62
either images going from one to the next shot starting finishing or text description which is
63
obviously really really good.
64
Let’s go to create and now that’s done the cityscape at sunset where we’ve put our things together
65
through storyboard which is a really nice feature so let’s have a look at these if I scroll along.
66
Wow OK let me just open that up OK so understands a plane flies by that’s a really nice shot with
67
look at the curving there of the distance that drone shot it doesn’t use much of my first image
68
before it gets into the next shot I did set two seconds.
69
Maybe it’s completing by that two seconds but it’s given me a whole 10 second because my storyboard
70
was 10 second long and then at two seconds we have the plane came in remember this one right here
71
there zoom in and then nothing a plane flies in the sky that was it.
72
OK let’s have a look at the other one it generated right there and now this one wow it’s looking
73
like from the undercarriage of a plane as it turns around not sure if there’s two wings is that the
74
back one a bit long.
75
But I really like this first image right here now this is obviously a store is still relatively so
76
in its infancy of release and it’s amazing what it can do the generations are incredible and the
77
storyboard feature really nice and it’s going to get better and better.
78
Yes the images are really nice the videos are great the text the video is going to bring on a whole
79
new range of person that wants to create this for what we’re doing in course right now there are
80
cheaper options in total it’s cheaper to have a mid journey subscription and runway than it is to
81
have a soarer subscription for example.
82
It’s definitely got a place is going to be you can tell in these early stages of things are
83
releasing in the tools available in Sora is going to be the best if not perhaps it is the best at
84
video generation but getting what you want and your control is probably as important obviously as
85
the video it generates itself still good having great video if you can’t direct it and somewhere
86
where runway excels in your direction of video and where Sora excels in its quality of video.
87
Obviously with runway you need to give it the image you’ve created inside mid journey or somewhere
88
so yes it does have a place there are pros and cons between all of them and you decide which one is
89
best for you now let’s have a look some other tools now all different prices and capabilities for AI video generation.
— Kling 01 – Update Lecture —
Now let me introduce to you Kling O1, which has just been released.
This is a brand new multi-modal, supercharged creation tool.
Now, the other lectures I have on Kling video use the AI video generator inside here.
And there’s actually quite a lot of this you can do already inside there.
Once you generate the video, you’ll see me add elements, multi elements, and things like that.
But what they’ve done here with Kling O1 is they have enabled it all in one workspace.
You’ll see me working right here; I’ve got a quick example for you, and then we’ll build one.
You’re able to add in images or elements and scenes and put them all together in one creation place.
So, for example, in this one I uploaded an image of myself holding this sword, which was taken from the elements they have here, in this scene in Tokyo.
You just put it together like this; I generated that, and here is the result.
Here is the picture of me: I’m inside Tokyo and I’m holding a sword.
That’s my face; the camera moves in on me, using these elements in here.
So let’s build this together, shall we?
Let me cancel all of this and show you how to do it.
So to access it, you’ve got it up here: Kling O1, very simply.
And you’ll see, when I use the AI video generator, how similar this is to use: I can add images or video, and if I want, add elements.
So you could upload your own video and add an element.
I do this in the video generation lectures, where I have the panda and change the instrument it’s using, or a man holding an axe walking down the street.
But this is slightly better than that in the results that you’ll get.
So if you add an image, you can either click to upload the image here, select an image from my history (something I’ve generated before), or upload a video or select one from my history.
So you can just upload your own image here.
Let me just select one from my history, and you’ll see I’ve even got the creative stuff I’ve created, or ones I’ve uploaded.
So let’s take, uh, yeah, let’s take me again.
Actually, it’s a nice, clear image.
There’s an image of me right here.
Confirm.
Okay, now, what do you want to do with this?
So I clear that; that’s my previous prompt.
I can say "this man", and that now refers to me; the image reference comes next in the prompt.
And what do I want it to do? "In this scene."
So I can now add either my own image of a scene if I wanted to, or, if I click over here to Element, I can actually see what they’ve got under preset elements.
Here are your elements.
When you upload your things in here, you can add them to whatever sections you want; great for building out folders.
Or, if I go to preset elements, I’ve got all the characters here, any favorited, trending, animals, items, or, if I scroll along here, scenes.
So you’ll see this was the Tokyo one I used before.
Let’s instead put me in a futuristic setting right here.
So "this man in this scene", and I want to say "holding".
And now I can come over to elements here and go to Items.
What am I holding?
In here there’s a rose, a motorbike, uh, this kind of mythical thing.
The only trouble is, I’ve got quite a realistic me and quite a realistic setting, and this will probably not look that realistic.
It’s better to upload your own, I think, but for the sake of this tutorial, it’s about knowing how to do this.
"Holding this."
And now I could prompt; you’ve seen me prompt in the previous ones.
Using DeepSeek, I could prompt for camera movement: zoom in, far-away shot, pan left, pan right, whatever it is, but I’m just going to leave it for the sake of this example.
Let’s hit generate on that.
Once again, you want to make sure you’re doing video generation.
You can do exactly the same thing here with image generation: if I click Image, then I can upload this and have that.
But let’s just keep it to video.
It’s Video O1 because we’re in O1 right here.
Then Professional, and I want it for five seconds; you could have more, depending on the plan, and I’ve talked about plans before.
Aspect ratio 16:9, yes.
Outputs: I just want one.
Let’s generate that, and that’s finished generating.
Let’s play it through here.
Yes, it definitely looks like me, and I’m definitely holding it.
It has done a bit of a bad job making it look realistic; it’s a slightly animated style as I hold it, but it has matched the background, and it definitely looks like me.
Look at the lighting on the face as I lift it.
I didn’t prompt for camera movement.
I didn’t prompt for him being angry, sad, happy, anything like that.
They’re all extra details that you could be adding in here.
Now, you’ll see me in a future lecture doing stuff like faces and lip sync; they’re coming up in the next one.
But this was Kling O1 I wanted to show you, which is really great: being able to add all of this in your elements.
So now you could be creating a whole story with this character, and just keep using yourself in that outfit.
Or, if you generated a character inside image generation earlier, keep using that character for consistency, and the scenes for consistency if you wanted to, and objects, adding them all in one place inside Kling O1.
A really nice tool, lots and lots of fun.
Okay, I hope you enjoyed that update.
Let’s get on and learn a little bit more about Kling and creating video.
— Kling 2.6 – Advanced Audio & Singing (Update Lecture) —
Now, I’m going to add this lecture here in Kling.
You’ll see me in the future lectures using Kling 2.5; here’s 2.6, an update.
They will keep updating, so keep checking this drop-down up here.
There’s no difference when you are generating these: for example, we use DeepSeek, we prompt here, our settings are below, so no difference there.
The next lectures use 2.5, and all the extras for adding in elements, prompting, everything else, are the same.
But 2.6 has advanced audio; let me show you this.
Use quotation marks for speaking or singing content: for example, the character sings "look at the stars". It works best in English or Chinese (Mandarin).
Click to view the user guides; you can click here and view more if you want to.
It’s got a whole drop-down right there to go through this.
I do not want that in Simplified Chinese; in English, yes.
And everything’s on here to explain.
But I’ll explain it to you right here. So I can do start/end frame, image to video, or text to video.
Keep it on 2.6.
Okay, let’s do this.
A man, white, aged 70, outside in a snowy scene; he is wearing a coat, scarf, hat.
It is Christmas; he sings "We wish you a merry Christmas", for example.
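To make the quotation-mark convention concrete, here is a tiny, hypothetical Python helper that wraps the sung line in quotes the way Kling 2.6's hint describes. The function name and structure are my own illustration; Kling itself only sees the final prompt string.

```python
# Hypothetical sketch: quoted text inside the prompt is what Kling 2.6 treats
# as spoken/sung audio content; everything outside the quotes describes the scene.
def build_audio_prompt(scene: str, line: str, sung: bool = True) -> str:
    verb = "sings" if sung else "says"
    # Quotation marks around the line mark it as audio for the model.
    return f'{scene} He {verb} "{line}".'

prompt = build_audio_prompt(
    "A white man, aged 70, outside in a snowy scene, wearing a coat, "
    "scarf and hat. It is Christmas.",
    "We wish you a merry Christmas",
)
print('"We wish you a merry Christmas"' in prompt)  # True
```

Swapping `sung=False` would give plain speech instead of singing, which matches the point later in this lecture that the same mechanism works for dialogue.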
Now, I’m going to use DeepSeek just to make that prompt better for Kling specifically. Let’s look at what it’s got right here.
Close up, soft light: a seventy-year-old white man wearing a coat, scarf and hat strides in a snowy outdoor scene, singing "We Wish You a Merry Christmas", positioned at a higher angle with a shallow depth of field.
And then there are variations, like a long shot here, front view; I quite like that.
So I can click to either use the prompt or click Generate straight away.
Let’s use the prompt, because I’m going to check my settings.
Here I want to make sure of all my settings: five seconds, 16:9, one output.
All perfect, great.
Let’s have a look and see what happens there.
"We wish you a merry Christmas."
Amazing.
So you’ve seen the videos online where people are doing singing content, either with animals or with people.
This looks super, super realistic.
You could also do it with image to video: add your own image.
You could have yourself singing, or someone else you have permission for. That looked great, even the wrinkles in his face, and the sound was good.
Now, you don’t have to do this with singing.
You could just have speech, whatever you want, and the lip sync is already here inside Kling with 2.6, with singing, and great results.
Really, really nice, all in one place.
The 2.6 update is really nice for this and for competing with other video generators: being able to have this singing and speech all in one, very realistic, ready to put out there wherever you need it.
So you could use this for standalone social media videos that could be funny, or you could also use it inside your scenes.
You could have shot/reverse shot: keep using your images, ones we generated earlier with the image generator, and have characters talking to one another to build up a whole scene.
A really nice update from Kling 2.6 for the audio there. The rest, all the other things you want to do here with lip sync and everything else, is all coming up; you can see that inside the next lectures on video, where I’m using 2.5.
Okay, I’ll see you in another lecture.
— Kling: Text to Video with Kling + Lip sync/AI Sounds/Elements & More —
Now, over the next few lectures, I’m going to be talking about Kling and video. We’ve got text to video, image to video, references, all stuff like this. And later I’ve also got sound effects in the sound effects section, and avatars later too, using inside Kling. But if you want to know how to access Kling and all the different plans and stuff, go back to the image section, section nine of the course, where I go through how to access Kling and the layout of Kling, where everything is, so you can get all that information there. And now we’re just going to jump straight into making video with text to video here in Kling.
Now, Kling video: I think this is where this platform excels. Really nice outputs. Previously we were using Kling with image, and we were doing text to image, image reference, and restyle.
And now, right here, if I come down, is the AI video generator. Or, if you were on the Explore page, you could of course just come down to Video right here and you’ll come to the same page. Now, much like the image section, you’ve got text to video, image to video, and multi elements. We’ll go through these one at a time. Text to video is a really nice tool for this; I think it gives some of the best generations, and it’s probably the one you’ll be using the most. This has a prompt box just like image, and it also has the advanced settings right here if I want to add sound and music. So let’s start with prompting. We’ll prompt for a video, and then in the next lectures I’ll show you what you can do after you’ve generated your video; there are some extra things we can do here. Much like I showed you in text to image, I have an ideal prompt structure here, but you should also be using DeepSeek inside here, I think, to make it even better. It comes down to five points. The first one is overview and style; I would take a note of these, or just try to remember them. Then subject and costume. This way I never miss anything. Then setting and conditions; it’s best to keep these and just paste them in every time rather than typing them. Then action of subject. And the last one is camera movement; previously, in image, that would be more like camera angle, but here we’ve got camera movement, which can also include angle. So let’s do a similar prompt to the one we did for image, when, remember, we generated the image of the guy in New York like this. So, overview and style: I’m going to say, as an overview, a man walks down the sidewalk in Times Square. And then I’m going to say realistic, realism; I like to say that twice sometimes. Okay, let me correct the prompt there. That’s the first bit right there. Subject and costume: this time, let’s do a female aged 30 wearing a brown leather jacket with a fur collar. I could go into more details: stuff like blonde short hair, red lipstick. What’s her ethnicity? She’s white. If I wanted to, I could go into more detail right here: she’s wearing a scarf, she’s got her hands in her pockets, things like this. Setting and conditions: I’m going to say it’s Times Square, New York, and for the conditions, daytime, summer, sunny. Action of the subject: let’s say she walks confidently and looks up at the buildings. Now the camera movement; I’m going to do something a little bit advanced here. Let’s go: camera follows back as she walks, and then the camera moves around her all the way to the back of her.
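The five-point structure can be sketched as a simple template. This is a hypothetical Python helper of my own, not part of Kling; it just shows how the five parts assemble into one prompt string, using the example being built in this lecture.

```python
# Hypothetical sketch of the five-part prompt formula from this lecture:
# overview/style, subject & costume, setting & conditions, action of subject,
# camera movement. Kling itself just receives the final joined string.
def build_prompt(overview: str, subject: str, setting: str,
                 action: str, camera: str) -> str:
    parts = [overview, subject, setting, action, camera]
    # Normalise each part to end with exactly one full stop, then join.
    return " ".join(p.strip().rstrip(".") + "." for p in parts)

prompt = build_prompt(
    overview="A woman walks down the sidewalk in Times Square, realistic",
    subject="White female aged 30, short blonde hair, red lipstick, "
            "brown leather jacket with a fur collar",
    setting="Times Square, New York, daytime, summer, sunny",
    action="She walks confidently and looks up at the buildings",
    camera="The camera starts in front of her and circles around to her back",
)
print(prompt.count("."))  # 5
```

Keeping the five fields separate like this is the point of the formula: you can paste the same setting and subject into every shot and vary only the action and camera lines.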
So that’s quite conversational; it’s not really instructional for a camera, but it captures everything that I want. So then I would use DeepSeek just to make sure that’s optimized for the platform, for generating what it is that I want. Let me read the first one here: a 30-year-old woman with short blonde hair and red lipstick wears a brown leather jacket with a fur collar, walking confidently down a sunny Times Square sidewalk.
The camera follows her movements from behind, panning around to capture the landscape’s digital billboards. I don’t actually want it from behind, so I’m going to manually change this, unless one of these others works: the handheld camera circles around as she gazes upwards. That’s actually what I want, so let’s run with that. Always read your prompts fully before you go through. I can also add sound effects and music here. Let’s go sound effects: sirens and busy New York street. I don’t want any music, but you could say stuff like dramatic music or something like that if you want to set the scene. So: sirens and busy New York street sounds. How many outputs do I want? One output in 16:9. Yeah, let’s keep it like that; just five seconds is fine. And let’s generate this. And that’s finished generating. I can already see the woman is exactly as I prompted for, on the sidewalk, yes, in Times Square; I can even see some lens flare here. Really nice. Okay, the sirens are a little bit intense, but they’re definitely there. She walks, but the camera didn’t move around her. So let me reprompt and try to get exactly what I want right here. So: fair-skinned woman in her 30s strides through Times Square, fur-collared; the handheld camera circles around. I’m just going to say the camera starts in front of her and then circles around her to the back as she gazes upwards at towering buildings, capturing summer sunlight reflecting on crowded sidewalks. I’m going to leave this last bit off, not give it too much information. Let’s generate that and see if we can get the result that we wanted. Now, I’m going to scroll down, and you can actually see that I’ve played with something similar before. I think I put a young guy here inside Times Square. Let’s have a little look: here’s him walking, from behind in this one. He walks forwards, and I think this one moves around him. Yeah, nice. So I can see the prompt I was using for that: the camera follows backwards as he walks, he’s looking up at the building, the camera moves around him fully to the back of him. So let’s see if we can regenerate something similar with this woman and the prompt we’ve got going here. Now, all of these examples, you’ve probably seen similar things before. When I was trying to think of ideas, I would do something like "sailboat" and it would prompt like this. And I’ve got lots of different videos; I was testing different things for text to image. There’s a panda playing guitar. There is a sailboat; let me just turn this down slightly. There is this object in a scientific-experiment-type futuristic city. You can see above here, a cyberpunk urban space. Here are the guys on the moon. There’s a volcano inside a busy city about to erupt.
That’s a really nice image. I mean, not nice in a pleasant way, but a really well-constructed image. Here’s a woman walking her pet crocodile or alligator down the sidewalk in Beverly Hills. Here are some Scottish guys in kilts as it pans to see the Highlands behind them. And here’s an anime-style walk-through. Really nice. You can see you can get all different styles and everything here through Kling. It’s a really, really good tool to be using for this, with really beautiful results. Now, let’s see if we can get that camera style, which is often the thing people struggle with when generating video. I get a lot of people commenting to me: I just can’t get the camera to move up, down, and around.
Well, Kling is actually quite responsive to camera movements. Let’s scroll and have a look at this. Yeah, the camera moves around to her back. It almost stays still, but that’s a really, really nice image right here. Really good. Okay, let me go back to my original prompt for that guy and see if I can mimic almost exactly what happens. Let’s use that prompt; I’ve copied it and now I can paste it in here: a white female aged 30 wearing a brown leather jacket with fur collar walking through Times Square, sunny winter day. We see her from the front; the camera follows back as she walks. She’s looking up at the buildings. The camera moves around her fully to the back of her. Okay, let’s run that prompt right there. Okay, here’s the result. Oh, I didn’t prompt this time for blonde hair. Never mind.
Let’s see if we can get the result we wanted. It follows back and then moves around, really nicely, as she walks. Look at the lens flare in there, too. And I’ve still got the siren sounds. As opposed to this one, where the camera kind of stays still and then pans around, this one moves backwards and then moves around. Kling is really responsive to camera movements. I think that’s why it’s such a popular tool, even compared to other video tools: it’s very responsive to camera movement, which I think is what sets it apart. Really, really nice. Now, after you do this, you have some options right here: lip sync, multi elements, AI sound. So let’s go through these. Lip sync: you’d have seen me do this briefly in the other lecture, where I talked about when we did image and we did reference. Let me just do a text to speech here. You could also upload something if you wanted to, which is probably a much better option than text to speech. Text to speech is never really optimized; it always sounds slightly robotic. But you could upload your own here if you did a recording and changed the voice, perhaps using another tool like ElevenLabs or something. So, text to speech: let’s go with a female voice, and I’m going to say "New York, here we come!", exclamation mark. Let’s have a listen. Ashley: "New York. Here we come." Okay, nice. Let me just put a little bit of punctuation in here. "New York. Here we come." Let’s play. Christina: "New York. Here we come." Emma: "New York. Here we come." Okay, let’s just add that speech right here at the beginning, while she’s still walking towards the camera. Let’s add speech and generate that, and that’s going to be loading right here. Whilst we’re waiting for that, let me look at multi elements. This is really kind of exciting, actually. You can swap, add, or delete things inside your video. That’s actually kind of understated; I’m playing it down. This is an incredible thing right here. So I can add something, I can take something away, or I can add an element in here. I could take anything: if she was holding something, I could replace what’s in her hand. Let’s change, I don’t know, let’s click up here, for example, on her hair right there. Preview the whole selected area: if I click to preview that, it’s going to start selecting the area, and you’ll see it scrub through the entire thing just to make sure it’s selected. So, yes, selected.
Yes, selected. Yes, selected. Lovely, really good. Let’s confirm that. Now, what do you want to do with this? I can upload an image, for example; I could add someone’s hair, a hat, or whatever it was. Or I can prompt for it. So I could say swap the brown hair from the video for, let’s go, blonde hair. Okay, for this video. Really nice that we can do this, because, if you remember, I forgot to prompt for it earlier. So let’s generate that now. In the meantime, here we go: the lip syncing is finished. Let’s take a look at this. "New York. Here we come." It’s never spot on; lip syncing is something that probably has to come on a lot. Inside native tools, when you’re text prompting, it’s better, but if you’re adding lip syncing in post, it’s never great. And here she is moving; it works better when the subject is just looking at the screen. You saw me do it earlier when we were talking about this right here: "And welcome to the Kling AI video course." It’s better if someone is facing you than moving. But it is an option, and it is there, so I wanted to show you now.
133
Now, let's wait for this final one where I'm swapping images out here, and I'll show you another one. Actually, let me show you one where I can see a hand. So if I take this guy right here, maybe this one is slightly better. OK, let's do that, and let's go to multi-elements on this one. OK, so it selected that guy right here. Let me just select his hand. If I click on that, it's targeted it. OK, great. Preview the selection. Let's make sure it's tracking his hand the whole time. Great, that has selected his hand the whole way through, I can see right there. Let's confirm that. Now, I could prompt here "holding an axe" or whatever it is that I wanted. Let me just add an image right here of an axe. If I drop that in, it's analyzing. Yes, that's the image that I want. OK, confirm that. OK, I want to swap the hand from the video. If I just click that button, I can select right here: swap the hand from this video for the axe from, let's do the @ symbol, this image. OK, and let's see if we can make him carry an axe. That's quite a big axe.
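If you like to keep these swap prompts consistent between generations, the on-screen pattern is easy to template. Here's a minimal Python sketch; the helper name and the @-token shorthand are my own, based on what's shown on screen, not an official Kling API:

```python
def swap_prompt(target, source_ref, replacement, replacement_ref):
    """Build a swap instruction in the pattern used on screen.

    The @ tokens stand for the references Kling's prompt box inserts
    when you pick the selected video and the uploaded image.
    """
    return (f"swap the {target} from @{source_ref} "
            f"for the {replacement} from @{replacement_ref}")

# The hand-for-axe example from this lecture:
print(swap_prompt("hand", "video", "axe", "image"))
# swap the hand from @video for the axe from @image
```

The same template covers the hair swap earlier in the lecture: `swap_prompt("brown hair", "video", "blonde hair", "image")`.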
Let's run with that. I don't know what it's going to do with the size or anything like that. Let's go back to our previous generation that we were waiting on right here. Let's go for this one. OK, so this was swapping the dark hair for blonde hair, and it did really well. The hair even bounces. It has a little bit of a, what's the word I'm looking for, morphing style right there, but you can see I was able to swap that out perfectly, which is really nice. A really great tool. Now, this one is going to be a lot more difficult.
I'm swapping the guy's hand to hold an axe. I don't know if I pushed it too far, but let's test Kling to the limits so I can show you here in this course. OK, that has generated right here. Let's see what it does from the start right there. OK, he's definitely holding an axe in his hand. Yeah, that actually worked. It's not super clear, because obviously the hand is at the bottom of the screen right here, but he's definitely holding an axe.
So that did work. I can still see his hand; is he gripping it fully? It would have been a lot better if we had text prompted for him holding an axe and described the axe within there. But if you need to swap something out from a scene, if you already have an existing scene or something, then of course you can do that, and I wanted to show you that.
There are also other things here: I can extend this if I want to, and I can prompt the extension to go another five seconds. For example, "man stops and looks up at the building and the camera moves to a close up of his face", so we can just prompt and extend the scene. 35 credits to generate that. Let's generate. That's obviously handy, because maybe I generated five seconds and got what I wanted, and now I want an extra five seconds or 10 seconds.
It's a way to get even more time, and that's obviously very handy, and lots of you will want to know it. I can also change the AI sounds on this. OK, so what sound do you want on here? I've got "footsteps echo on the pavement, camera shutter snaps, soft urban sounds". So let's change this to "soft footsteps echo on the pavement, people screaming in the background, fear, panic". This guy's walking with an axe; I want the sounds of screaming in the background. OK, let's generate. OK, here is the generation for AI sounds. That's a lot quicker than generating an extended clip.
So let's play this first. Let me turn this up. I don't hear anyone screaming. Here are my four options, by the way; it generated four outputs, if we remember, right here. OK, first one, second one, third one, fourth one. I've got a little bit of some people talking there at the last moment on that one, but definitely no screams of panic. Let me regenerate this. Let me take out the footstep sound and just do dramatic drama music in the background on here. I could use DeepSeek here, which I didn't do before.
And I can say, OK, "distant panic screams". Use that prompt. And for the music, keep that as it is: drama, dramatic music. OK, let's generate that. We've still got four outputs, so let's see what the four outputs bring us here. We're still waiting for that extended shot; it's got five more minutes. It said nine minutes for that, so that's quite a long time to extend the shot, but that's how long it takes. It could be quicker on different plans, or depending on how much the server is being used. So let's wait for this to generate. OK, it's finished. Let's have a listen to this. I've definitely got that dramatic music. This is the next one. Same there. Third one. Yeah, for sure. I really like the ominous music.
You can tell that if I was making this like a horror movie, this guy's about to break into someone's house with an axe. I didn't get distant panic screams. Actually, maybe I did a tiny bit on this one. Yeah, there. It kind of ignores the AI sounds just slightly; you might want to add them in post, or re-prompt and re-prompt. But the dramatic drama music, it definitely got that. That's really nice, really adding a kind of ambience to your scene. All right. So the last thing I wanted to do was just wait for this to generate, where I extended a shot, and then I've pretty much shown you everything here inside the video section on text to video. We've done prompting, gone through how to get the perfect prompt and all the settings you need for that.
Five seconds or 10 seconds: obviously, like I showed you in the earlier lectures, it's 50 credits to generate that, or 25 for five seconds, plus the number of outputs you want and the orientation of your shot. And then, once we generated our video, we went through everything for multi-elements; we added an axe in here. There's lip sync, so you can add lip sync to this, and AI sounds, where you saw I just added the dramatic music in here. So the last thing was extending the shot; that's the last thing I wanted to show you here. There are other things, like if you want to download this, with or without the watermark, or you could just download the audio, anything here. Or if I wanted to delete this, report it or publish it, that's here, or star it to find it later. So that's where you download it. And we've covered everything here that I want to cover inside this first one on text to video. The next lectures will be image to video and then multi-elements. So let's wait for this extended shot to finish. Okay, that's finished generating there.
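To budget before a session, the prices quoted in this lecture can be turned into a quick calculator. This is a sketch only: the numbers are the ones quoted on screen (25 credits for five seconds, 50 for ten, 35 per extension), they vary by plan and model, and multiplying by the number of outputs is my own assumption:

```python
# Credit prices as quoted in this lecture; check your own plan.
COSTS = {"clip_5s": 25, "clip_10s": 50, "extend_5s": 35}

def estimate_credits(clips_5s=0, clips_10s=0, extensions=0, outputs=1):
    """Rough total credit estimate for a batch of generations.

    Assumes cost scales linearly with the number of outputs requested,
    which may not hold on every plan.
    """
    per_run = (clips_5s * COSTS["clip_5s"]
               + clips_10s * COSTS["clip_10s"]
               + extensions * COSTS["extend_5s"])
    return per_run * outputs

# One 5-second clip plus one 5-second extension, single output:
print(estimate_credits(clips_5s=1, extensions=1))  # 60
```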
We had a five-second shot; now it's 11. So the man walks down, and then, just as I asked, the man stops, he does look up at the building, and we go into a close-up of his face. That's actually a really nice shot. How ominous that would be: this man is about to commit a crime with this axe and go into a building, and which building? He's looking at it, close up like that, and then I'll cut to a shot of the building. It even looks like the Empire State Building in the background here. Really, really nice. That was a great extension. And once again, the main thing with Kling, I think, compared to some other AI models, is that it's very responsive to camera movement. I know that if I'm in other tools and I say "stops, and then goes into a close up", it would maybe happen after several prompts, or it could happen straight away. But here in Kling, first time, every time that I've prompted for a movement like that, I've got movement from my camera, which is really nice. That's how you get these really dramatic, really realistic shots. That looks really nice. Even the beard is a bit patchy, like it would be on a real person; that's realism.
Really good shot. And the quality is outstanding, isn't it? Look how realistic this looks. He walks, and even the camera movement is realistic. Other people in the background, no one's walking backwards, no morphing: a really good shot. Very impressed with Kling; it's a really good model. So that was image to video here. Sorry, text to video. Let's talk now about image to video: if you want to take an image that you have, as a start or end frame, and turn it into video. Let's talk about that in the next lecture.
— Kling: Image to Video with Kling (Frames and Elements) —
Now the next step still remaining inside video generation here: we did text to video, now image to video. This is what it sounds like, although there are both frames and elements; I'll go through them both here. A very interesting tool, especially elements.
So this is pretty much what it sounds like: I can drop in an image and then I can prompt for it, much like text to video, but instead of creating it from a text prompt alone, I'm giving it either the start or the end frame. So let's do that. OK, let me just grab an image of me. Here's actually me inside Times Square; we keep generating tools inside Times Square here, so let's actually take me inside Times Square. Let me show you this image I'm going to use here, this image right there of me inside Times Square on a very cold winter's day. I can choose it as either the start image or the end image, meaning the video can start on that or it can end with that.
So let's start with this image, and then I'm just going to say "man walks". I could give it a lot more prompting than that, obviously; I could say "and the camera moves and zooms in" like we've been doing, but let's just generate this and see what happens. Now, you don't have to select the orientation for this, i.e. 16:9 or 9:16, because it's taking what's already there, which is 9:16, much like a Short. I could of course go to the image and extend it to make it wider, just like we've done before, if I wanted to. So let's just do "man walks", 5 seconds is fine, 1 output, let's generate that and see what happens.
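That orientation inference is simple to reason about: Kling reads it from the upload's own dimensions, so there's no setting to pick. A tiny illustrative sketch (the function and the three buckets are mine, not Kling's code):

```python
def infer_orientation(width, height):
    """Classify an upload the way the lecture describes the tool
    inferring orientation from the image itself (a simplification:
    real aspect ratios vary, these are just the common buckets)."""
    if height > width:
        return "9:16"   # portrait, like a phone shot or a Short
    if width > height:
        return "16:9"   # landscape
    return "1:1"        # square

print(infer_orientation(1080, 1920))  # 9:16
```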
And while that generates, if I want to, I could actually set the end frame. I can't do it currently in 2.5; by the time you're watching this, maybe you can. If I just drop down to 2.1, I can select this and upload, or, now it's selected, just drag that in there, and I'm going to say it ends on that shot. OK, so now let's have it where the man walks, but I'm going to end on this shot. Let's go: "man walks and camera moves away", and it has to end on that shot. So I've got a move away; maybe it'll start closer or next to me and move out. Let's see what this does. Let's generate that.
And just so I don't forget, I'm going to move this back to 2.5. If I delete this first, let's put this back on 2.5 so I don't forget. You'll also see the quality difference here between 2.5 turbo and 1.6 right here, professional mode in both, with start and end frames. So let's take a look at that.
OK, that first one has generated. Let me just click that to full screen. I'll turn it down a bit; you can hear the sirens going in the background. Really nice. I still have the sound prompt "sirens, busy New York street" selected from the previous generation. You could, of course, remove that. The thing with Kling is that it keeps your previous generations and prompting in there, so you have to remember to go and remove those. There's me walking. It looks extremely realistic. Let me just quickly make that full screen here. Really nice. That's me; it's kept the face really well.
It's got people moving behind, no morphing, and the camera moves back as I walk. Super realistic. Really nice. Kling has really good realism. It's even kept the text really well here; it doesn't morph, merge or move. And it's generated the lamp post here and knows where it ends. Really nice, really good quality.
And the next one, where I'm ending on that shot, takes longer, because it needs to generate something at the beginning to end on. It says about 10 minutes here, so let me go and talk to you about the other part: I want to show you elements. Now, elements is really interesting. I can upload multiple elements and put them inside a scene, which is really nice. Once again, the prompt is here, so let's move that away. OK, so if I upload an element, let's upload me right here. It's just analyzing the image. Yep. Let's select my face. OK, I just want to put my face in here. Actually, maybe the whole subject; let me move that. Yeah, let's keep me in the same clothing. OK, so let's select subject and confirm. If you wanted to, you could do manual and draw around this, but it does a really good job selecting all of me, so why would I? Hit confirm. So I've told it who my subject is.
OK, let's take my location. Let's take Times Square, to keep on theme here, and I'm going to drop myself in. OK, this is the scene I want. Perfect. Let's just click auto. And now maybe I want an object in here. Let me drop in this image of a map right here. Great. Let's go. I'm going to say subject right here, and it's going to select the map. Perfect. Let's hit confirm right there. So I've got three elements. I could add another one, but I don't need it.
Now I'm going to prompt for this. OK, so I'm going to say "man", and reference the subject: "man @subject in Times Square holding map. He is walking down the street looking around, lost." So I've done this here: subject, subject, auto reference right there. Let me just click down here: sound effects, do I want to leave those as they are? Five seconds. I want it in 16:9. One output. Let's generate.
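If you reuse the same elements across many shots, templating the @ references keeps prompts consistent. A small sketch; the dictionary keys and the four-element cap are my own bookkeeping based on what's shown on screen, not a documented limit:

```python
# The three elements set up in this lecture: subject, scene, object.
elements = {
    "subject": "me, full body, same clothing",
    "scene": "Times Square",
    "object": "a folded street map",
}

def compose(template, elements, max_elements=4):
    """Fill a prompt template with @element references."""
    if len(elements) > max_elements:
        raise ValueError("too many elements for one generation")
    refs = {name: f"@{name}" for name in elements}
    return template.format(**refs)

prompt = compose(
    "man {subject} in {scene} holding {object}, "
    "walking down the street looking around lost",
    elements,
)
print(prompt)
# man @subject in @scene holding @object, walking down the street looking around lost
```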
So that's finished generating. Let's see. This first one was where I wanted to end on that shot of me. Remember, it starts with me further back, looking around. Really realistic. I'm looking that way, I walk forward, and then it ends on the original shot right there. Exactly what we asked for. Now, remember, this one had me wearing this clothing, and it does look like me in this shot. Exactly. Walking down the road right here, holding this map. Let's see how well that did.
So it's got me, definitely me, wearing that clothing, walking through the street, that exact scene I gave it, and I'm holding a map. Perfect. It's added in some more traffic right there, and things and movement. Really realistic. Exactly what we asked for: all three elements.
So this is really exciting when you want to create stories in scenes with certain characters, objects and people, for consistency, or for bringing things together yourself. And of course, you could create your own characters on a green screen, and objects, and put them into places like this. Really good. So that was image to video. Let's talk next about multi-elements, which is quite similar; we touched on it earlier, but it's something you might want to use as a standalone feature. Let's talk about that in the next lecture.
— Kling Video: Multi-Elements with Video (Add, Swap and Delete) —
Now, the last element I want to touch on: we've done text-to-video and then we've done image-to-video right here inside Kling. The last thing I want to touch on is elements, multi-elements. We've done this before; remember when I was doing text-to-video, I showed you it was an inbuilt feature I could use inside there with elements. If I go to a video, let me show you, I can add multi-elements right here, but it's also a standalone feature right there.
So let me show you with another example. Let's delete these that we previously did. I can either swap, add or delete, so let's do all of these. OK, let me find a video; let's say this video right here, which is of a panda playing a guitar.
So if I add the selection, let me just select this right here. Let's select the guitar. So it's done that and selected it here via add selection. Let me just preview the whole video to make sure it's highlighted in green all the way through. Yeah, it plays it, and it's highlighted in green all the way along; it didn't go over his fingers. Perfect, brilliant.
Okay, let's hit confirm; I want that. And now I also want to add in the banjo right here; I'm going to swap out this guitar for a banjo. Yep, that's it there, so let's select that right there, the subject. Yes, selected, all in green. Perfect, let's confirm.
And then, once again, just like we did earlier, we're going to prompt this with the selections that are chosen. So if I go swap, let's do this: swap the guitar, swap the guitar from the video. So let's go "from @video", let's remove this, "for the banjo", @, and let's select image one. So: swap the guitar from this video for the banjo in this image.
So five seconds, one output, yep, and generate. Now, while we wait a few minutes for that to generate, let me go up and show you some more right here, because I can also add something. If I've got my video right there, let's actually just delete that, let's confirm, let's delete; and let's get that video, we might as well do it again, let's click that video. And this time I want to add something to it. I could add me in there if I wanted to, although the lighting will be a little bit different, I think, but I could add me, so let's add me into this scene. Let's add the subject, yep, all of me, confirm that, nice, and then that's the video. So: "using the context of our video, seamlessly add a man sat watching from this image right here", and let's generate that.
Now, delete works in the same way. Let me just select right here: those trees, I could select these trees right here, OK, and then I could do a full preview, but I know it's going to select them all the way through, no problem, confirm. OK: "delete trees from the video, replace with grass hills", and let's generate that. OK, while we're waiting for these to generate: that was me swapping one out, and this is me adding, remember, add, right here. I've added me, sat with this guy, with this panda playing the guitar. Nice, that is me, exactly, a very realistic me.
Maybe the lighting is slightly off, or if I'd wanted me further across, I could re-prompt for that, but I'm in the scene with this panda playing guitar, one hundred percent. That's really, really good, really nice. OK, let's wait for these others to finish generating.
Now that's finished generating, that one here. This is where I was swapping; I was on the swap method here, and I swapped this guitar, if you remember, with this banjo. And here it is, playing it. It has definitely swapped out the guitar for the banjo. Even look at the fingers; you saw the claws as it lifted there. Really nice, really good, it swapped that out.
Kling did a really good job at that. Really nice: even the lighting looks very similar to how it should be, and the angle has completely changed; here's the base on that side where the pose was swapped around. It did a really intelligent, really good job doing that. And the last one to generate was me getting rid of the trees in the background, in the delete section.
Now, this didn't work so well, did it? Maybe because the whole scene is trees, there are so many trees, it didn't delete any of them at all. So let me just re-prompt and try that again. Let me select this and re-select what to delete. Let's try and delete something else here. This time I'm going to select the guitar, and let's see how well we do with that, the whole guitar, OK, and confirm. I'm going to delete that.
So now I say "delete guitar from the video". I don't know what's going to happen; is he just going to have nothing playing there at all? That's a bit of a big ask; it's going to have to make up a background and everything here. But let's put Kling to the test, let's see how well it does with the delete tool.
Now that's finished generating already, and you can see it's done a really good job. Well done, Kling. He's still doing the same movements, but without a guitar; he's now playing the air guitar. It did a really good job removing that. It struggled with the background, although let me compare: the trees here, the trees there, it kind of put in different trees, didn't it? But it's really good that it managed to delete that. So if you do need to delete something from a video, it could be something very minor; this could be really useful if you're generating things and there's something in the background.
There's something there that shouldn't be, or even from your own video; upload your own video where you need to remove, I don't know, a license plate or something like that. Really, really good. So that was all the video here inside Kling: text to video and all its different features, image to video, and multi-elements. In upcoming lectures I'm going to talk about Kling's avatar and also sound effects, but that was everything for video. A really impressive, realistic tool, really nice. Let me know what you think, and I'll see you in a future lecture on this shortly.
— Kling: AI Avatar (AI Presenter) —
Now, inside Kling, let's talk about avatars. A lot of people are interested in this. First, location: if you're here on the explore page, the main page, I can scroll down right here, go to all tools, and right here is avatar. You come to a page like this. If you're already making something, perhaps you're in video already, then you can scroll down to avatar here and you come to the same page.
Now, you've got basic options right here: build avatar, or lip sync if you already have a video, perhaps one you've created or one of yourself. Let's go to build avatar; I think that's what most people are interested in here. Now, let's do a few things at a time. Avatar library: close it, open it. These are all the avatars I could choose from. Maybe I want this cat as my avatar. Think of these as a speaking person, a person to explain things. People use these for explainer videos; I've actually seen it on YouTube, people using them to explain AI models and tools and things, having a person explain it that's not themselves, or for ads. They're great for adverts and promotions and things like that. Really good.
So I could just choose someone. Let's choose corporate training right here, and then I can choose the speech. So let's first do text to speech. Let's say "Hello and welcome to the course. Great to have you here." Okay, let me just correct that. Calculating it: 32 credits for that. Let's run with that as an example. Now, that's generating right here. Let's also go back to the avatar library, where I could be choosing someone, or I could do AI image and create someone. So let's go for a guy, elderly. Let's say this skin tone. I could describe it all right here, or I could just use these presets: customer service, school teacher, podcast.
So, "a candid podcast studio recording scene of a young African-American man". Let's make him elderly: "an elderly African-American man sitting at a clean white desk. He wears a plain white T-shirt and large over-ear headphones. He has short, curly hair and a well-groomed beard framing a warm, genuine smile. In front of him is a sleek modern laptop", blah, blah, blah. All really good. Now, I want it in 16:9, so I can now generate that for four credits. Let's generate this.
Look at these; I like them, all really good. Actually, let's choose this one. I'm going to use this guy here as our avatar. Let's use the image; it's just analyzing and making it here. And this time, for this example, let's not add speech here; let's upload our own audio. I have my own audio to upload, so I just clicked upload, and it's analyzing the audio that I've chosen. This could be audio you've spoken yourself, or perhaps made using another piece of software, or perhaps it's an audio track from somewhere else.
Make sure you have permission for it. I've seen that people will obviously use, say, a baby and then put in Donald Trump speaking. Just my advice: make sure you've got permission to be using it, unless it's for your own personal use and not to be displayed anywhere. So now I've got this avatar person I've created, I've got my audio right there, and let me just remove this right here; I don't need an avatar prompt. Let's hit generate.
OK, let's go back. So this is our generation; this was the audio for the avatar we chose, this woman right here. Reference image: this one right there. And we have this one analyzing. So let's wait for these, and then we'll come back and I'll show you the results.
OK, that finished generating, where we had this person speaking. "Hello, welcome to the course", I think I said for it. Let's go back. "Hello and welcome to the course. It's great to have you here." OK, really nice. That sounds really good, and the lip sync is really good. Let me make this bigger a second and have another look. "...to the course. It's great to have you here."
"Hello and welcome to the course. It's great to have you here." Really realistic, really good. You can then obviously edit this any way that you want to; you can modify and edit, for example, if I wanted to change this speech or anything, I could do that. But how good is that? No longer are we going to have people presenting these kinds of videos; we're going to have AI, if not already. I've already seen plenty of adverts come my way advertising products, advertising streams, advertising whatever.
They're not using real people, because of the expense of hiring actors and such like. I think this is the future of avatar explainer videos and things like that. So let's go back. The only other thing we have right here is the guy that we created with my own audio added on. I can actually play you my own audio so you know what to expect right here. Let me just grab that for you.
"This is just a test example. Here is a voice to add." So that's the voice that I had, that we are adding over the top here. Let's wait for that to generate and take a look. And that's finished now. Let's take a look. So we've got our guy and my uploaded audio. "This is just a test example. Here is a voice to add. This is just a test example."
— Kling: Motion Control —
Now, Kling has a really nice tool called motion control, which a lot of people have been asking me about, and I'm going to show you right here. So, once again inside Kling, I just go to video generation, and then I go over to motion control here. You can see that 3.0 is actually available (there are lectures on that), and motion control is available in 2.6 right now.
But by the time you're watching this, it may be available in 3.0. Not that it'll make much difference to the way this works or how you use it, of course. Now, what is motion control? Motion control is really interesting and useful for making your videos, because you're able to upload a video of an action right here: someone turning, someone talking, someone waving, whatever you want the action to be. Then you upload a character, your character in the movie you're making, for example, and it can mimic what's being said and done in that scene. So you can imagine using it where you just record a quick video of yourself on your phone doing whatever action you want, because text or image to video didn't quite get the action you wanted; you quickly record a video of yourself doing it and then replace yourself with the character you want for your movie.
It's an amazing way to get exactly the movement that you want from a character with AI video. So the first thing you have to do is add the video of the character actions to mimic; this is the video you upload right here. Now, I'm actually going to do two videos, which I'll show you right here. For the first one, I'm going to go with this motion right here: this woman wagging her finger. I've chosen that because finger and hand movement can sometimes get lost, so a simple movement like this is quite a good challenge. And for the next one, the challenge is this man turning; you see, he turns his head and puts his thumbs up. I want that action because turning the head can sometimes be a bit of a challenge. So, the images I'm going to replace these with.
Let's go with this animated koala right here, with a chain and an Australian flag. So let's replace them with both of those videos and see how it comes out. And let's also do the same thing with this image: I have a guy here sat in the desert, with his back slightly turned to us. So let's see how this stacks up. Then I'm going to show you something exciting at the end, where you can actually change yourself: I'll take a clip of me and change me into a female. So, the first thing you want to do: add the video of the character actions you want to mimic. Let's grab that first video of the woman wagging her finger. I'm just going to drop her in here and then add the character I want. First, let's go with the animation, shall we, not the ultra-realism? I like to test these on both animation and realism, and you can see the difference. So, the two options I have: the character orientation matches the video, or the character orientation matches the image. This is important, though less so on this one. For example, let me put in the other image right here. If I was to have this person, whose back is slightly to us, side on, you're basically choosing between telling the action to happen while keeping the person facing the side, or keeping the person facing forward. In this case they're both almost identical in composition, so it doesn't really matter. I find the results are slightly better with the character orientation matching the video, primarily. Now, we can add prompts in here. You don't need to if it's a very simple thing like this.
If you wanted to add any camera movement or anything, then you can add it in right here, but for this I don't need to. With the settings below, let's actually do it at 1080, shall we? I just want one output right here. 96 credits to generate this. Let's generate and check that out. Task submitted. While we're waiting, let's remove this one, and let's actually do it with the guy who's sat with his back slightly to us. This time, I'm going to show you the difference where the action movement is face on, but our character is slightly side on, and I want to select this right here: the character orientation matches the image. You can see I get this warning up here: the orientation only supports three to 10 second videos, and this video, annoyingly, is 12 seconds.
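That three-to-ten-second rule is worth checking before you upload. A minimal sketch, with the bounds taken from the on-screen warning (the function itself is my own):

```python
def trim_needed(duration_s, lo=3.0, hi=10.0):
    """Seconds to cut so a reference clip fits the 3-10 second window
    that the orientation-matching mode warned about; raises if the
    clip is too short to use at all."""
    if duration_s < lo:
        raise ValueError("reference clip shorter than 3 seconds")
    return max(0.0, duration_s - hi)

print(trim_needed(12.0))  # 2.0 -> trim two seconds, as I do here
print(trim_needed(8.0))   # 0.0 -> already fits
```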
So let me just crop that and add in one under 10 seconds. I just trimmed that slightly,
52
let’s upload it should probably be about seven seconds or so eight seconds. And now I’m able
53
to select this. So now with this generation, we’re going to have this action, put in the
54
handout and waving I’m interested to see because we are keeping the orientation where this
55
guy is facing this way. I’m not going to prompt him for any camera movement, I could
56
say the camera moves around him to the left to the right, etc. But let’s just see what
57
it’s like here. Once again, I’m going to keep it 1081 generation, let’s hit generate. And
58
then exactly the same way. Whilst those are generating, I’ll show you all the results
59
at the end. And we can compare these. Let’s add in our other video clip where the man
60
turns to us. And again, he’s turning slightly. So this will be interesting. I’m going to
61
add that image in where it’s pretty much matched. They’re both facing the same way,
62
facing to their left. Let’s keep the orientation matching the video; it
63
shouldn’t make any difference. And let’s hit generate. And lastly, I want to remove this and I’m
64
going to put in the animated image of the koala, but this time still going
65
to keep the character orientation matching the video. Now this will be a little bit confusing
66
for it. It’s a real test for the model. Because this man is facing slightly sideways to his
67
left, and our character is facing forward. So let’s see if it can accurately match twisting
68
the character around slightly. This will be interesting because you can have a general
69
image of your character for your movie, it might be on a green screen, it might be an
70
image you’ve created of your character for consistency. And if this works, and is good,
71
it means you can have any video where the character, the person in the reference, is
72
facing any way, and you can just upload your front-facing image. So let’s generate that
73
also and wait for these results. Okay, our first one has finished. So for reference,
74
if you remember, let me get these on screen. So it’s easier. We had the woman wagging her
75
finger facing us. And she doesn’t look very impressed. And we had this happy koala facing
76
forward. So let’s play it. It has changed the expression slightly, not quite as happy,
77
and the eye movements match, and it wags its finger up in exactly the same way. I’m
78
not sure that’s ultra realistic for a koala’s finger. And let me just see if the woman right
79
there she pauses, holds, and then does she move her head slightly to the left? Yeah,
80
and a bit of a half smirk. And the same thing happens here: wags finger, holds, moves, and
81
then the head moves slightly to the right, with a bit of a smirk. Matched it exactly,
82
isn’t that great? That’s fantastic. Now the next one, if we remember: this was where we
83
had the woman’s action of wagging her finger, and this guy facing side on. And this time,
84
we chose character orientation matches the image, so he’s facing side on still; it
85
hasn’t changed him to face front on. And I can already see there’s a bit of an issue
86
right here. If the orientation is not matching, then it’s not great. Because basically he’s
87
trying to do the wagging finger as if the camera was over here on the right-hand
88
side somewhat. So the easy way to fix that is either inside Kling Image here or with
89
Nano Banana or something. If you had an image like this, then you could make them face forward.
90
If you wanted to keep them side on, it’s best to have a video where the person was slightly
91
side on. But it is matching the finger wagging, ish; it’s more of a pushing forward and back than
92
a wagging. You could use, for example, this part, cut there, and then go face on. So
93
let’s have a look and wait for these next couple of generations. Now the next generation
94
has finished. This, if you remember, was the man sat on a beach, and he gives the thumbs
95
up, a double thumbs up, and a smile. And we had our image of the man in the desert
96
right here. So if I play this, it’s copying it exactly. It works perfectly, because
97
they are facing the same way; the initial orientation of the image and the video match
98
pretty similarly. So it’s copying the action really well. A really nice copy,
99
really well done. The hands look more real than the face; the face
100
sometimes gets a bit plastic and fake here, but the hands look really good. And that’s
101
usually the issue. Now the last one, if we remember, was again throwing a bit of a spanner
102
in the works for Kling, seeing how it dealt with it. We had our man doing the double
103
thumbs up facing sideways, but the front-facing animated koala, so it’s going to have to
104
move them somewhat. I can see their waist down is pretty much facing forward still, but
105
they’ve twisted the body. Let’s see what this looks like. So yeah, and they’re looking over
106
slightly to the right this time. He looks at camera in the video, but he does do the double
107
thumbs up. Does he turn back? Yeah. Okay, so it’s copying it somewhat. It’s not exact;
108
it didn’t twist the koala around fully with the legs, but it’s pretty close, isn’t it? Really,
109
really nice. I love how we can use this. One way to use it is obviously inside our own movies:
110
if we can’t get the exact action that we want, we could upload any video that we have access
111
to and copy the action in it with our own characters. That’s great for when you’re text prompting
112
for video and can’t quite get the person to turn around, especially if they’re turning
113
to wave their hands in the way you want; you can create your own video, upload it here, and
114
have them copy the action from your original character. Amazing. So the last thing I want
115
to do with this is just show you myself with speech, and then also copying that. So in the
116
same way, I’m going to just delete these. Let me show you the clip of me. So here’s a clip
117
of me at my desk, speaking to camera: “Let me show you now this workflow really quick,”
118
showing you guys an AI video. A really quick clip right there. So let me drag that in there;
119
that’s the action that I want, and I want to upload my image. So the image is this; it’s
120
pretty much identical. I’ve used Nano Banana here to just swap me out with a female wearing
121
the same clothing, pretty much, essentially. So let me drop that in right here, let’s drop
122
that on, and you’ll notice that this clip has audio; audio is selected on here. I’m also doing
123
it at 1080, one output, yes, and it’s only 40 credits to generate this. So let’s generate
124
that and watch them match up. Okay, this just started playing and it made me laugh. This
125
has finished. Remember, we’ve got the video of me, and then we’ve swapped me out for a
126
female right here. So let’s click and play: “…can also be really, really quick. Let me
127
show you now with this workflow a really quick AI…” So you see the lip sync is not 100%; it’s
128
pretty close, pretty good, and if you’re watching at this size then perhaps it wouldn’t be noticeable
129
too much. You could run it again and see if it comes out any better. You’ll notice, of course,
130
that she has my voice. So what you could do then is grab the audio from this
131
original clip, or download it and grab the audio that way, it doesn’t matter, so you have the audio only. I could
132
go into ElevenLabs, grab that audio, and drop it into here. Now it’s got the audio
133
of me talking right there. I could then choose a female voice that I wanted and hit generate
134
speech: “…it can also be really, really quick. Let me show you now with this workflow
135
a really quick AI…” So now I’ve got a female voice, one that’s more suitable for that
136
video, if that’s the realism you want. I can download that, and then in my edit put
137
these together and sync them up. So now, after I download her voice, inside my editor
138
I just sync them up, put them together, and swap my voice out for hers. That’s the
139
little hack with ElevenLabs right there. So this is obviously amazing, really, really good, lots
140
of fun to play with, and also to get those shots that were just difficult from text prompting
141
or image-to-video prompting, to get exactly the movement that you want. It’s going to be
142
a bit of a game changer. People making mini movies who need scenes with people acting and
143
moving in certain ways: it’s going to be amazing for that. So that was motion control with Kling.
144
I’ll see you on the next lecture.
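The audio half of that ElevenLabs hack (grab the original audio, generate a new voice, then put it back over the video) can also be scripted with ffmpeg instead of a desktop editor. A sketch with hypothetical filenames; the ElevenLabs voice generation itself still happens in their app:

```python
# Sketch of the voice-swap hack with ffmpeg (hypothetical filenames).
def extract_audio_cmd(video_in, audio_out):
    # -vn drops the video stream, keeping audio only to feed into ElevenLabs.
    return ["ffmpeg", "-i", video_in, "-vn", "-acodec", "libmp3lame", audio_out]

def replace_audio_cmd(video_in, new_audio, video_out):
    # Copy the video stream untouched and map in the new voice track.
    return ["ffmpeg", "-i", video_in, "-i", new_audio,
            "-map", "0:v", "-map", "1:a", "-c:v", "copy", "-shortest", video_out]

extract = extract_audio_cmd("generated_clip.mp4", "my_voice.mp3")
remux = replace_audio_cmd("generated_clip.mp4", "female_voice.mp3", "final_clip.mp4")
```

Run the first command, feed `my_voice.mp3` to ElevenLabs, download the generated female voice as `female_voice.mp3`, then run the second command; no timeline syncing needed, since the new track starts at zero.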
— KLING 3.0 – 15 Second Video Clips! —
1
Now, let me tell you about Kling 3.0. Yes, this is an update lecture and there are several
2
after this because Kling has released 3.0. If I come up here, I’m inside video generation,
3
you can see there’s the previous 2.6 and the other versions, and now we have the new 3.0. So
4
I need to tell you about and we need to test how to use this and all the different features.
5
So there are a few lectures after this one showing the different features that are there
6
and also this one, because there is something brand new in creating
7
video clips: they are longer, and it’s amazing how long you can have them. I’ll show you in a
8
moment and we’re going to test the quality. So what I’m going to do is if I was inside
9
Kling here like this, I can go to video generation and it’s automatically selected for me. So
10
Kling 3.0. So I can add my start frame and end frame here and also text prompt. There’s
11
some other things here like multi shot, which I’m actually going to have a lecture on next,
12
which I can toggle on and off, allowing there to be multiple shots, which I can dictate
13
here or I can do this custom by telling one image here, multiple shots, shot, reverse
14
shot, etc. Really good. Talk about that next. And down here, when I come to generate it,
15
look at this. I can now scroll up past 7, 8, 9, 10, 11, 12, 15 seconds, which is incredible.
16
So let’s test this. What I want to do is test and see how good the quality is in 3.0 with
17
some usually difficult things to animate. So let me show you. I’ve got five images here
18
we want to test. Now, the first one here, I’m going to use this image of a pride of
19
lionesses running and walking, animal movement. So that’s going to be difficult. That’s always
20
a tricky one to do. The second one is I have this woman waving to us, also a bit of rain
21
on the window. So a bit of a double whammy for AI video here, but waving and keeping
22
it looking without any morphing, looking natural. Let’s test that. And then cars. I’ve got this
23
car drifting here in downtown Tokyo, wheel movement, car movement, as well as background.
24
And it’s at nighttime. So quite a lot in there. The next one, I’ve got this man with a wrinkled
25
face; try and make it not look too plasticky, which sometimes happens with AI video tools.
26
And smoke. Smoke drifting used to always be the cursed one; it would look like a chimney
27
is going crazy quite often. So let’s test that. And then the last one, we’re going to
28
follow along behind this woman walking, and let’s see the movement of her hair. Having covered
29
these difficult things here, things that have traditionally been prone to
30
morphing or not great with AI, we can test this newest model out. So let’s do exactly
31
that. Let’s start this right now. If I drag image one, I just pop it into here. This was
32
the pride of lionesses. So I’m going to just do a very simple prompt with this, I’m going
33
to say pride of lionesses run forward, camera follows. Now, optionally, I can also add my end
34
frame, with it starting here, or I can flip these and have it end on that. So I’m
35
going to start with this shot. For this one, I’m actually just going to toggle
36
this off and make sure there are no multiple shots, just one shot for this. But I’ve prompted
37
for that anyway. I’m going to toggle this up. And let’s do 15 seconds. I want to see
38
how good this is for a 15 second clip. And I just want to have one output on here. Now
39
native audio is selected. Anyway, I could also add elements into this, which we’ve covered
40
previously. And I don’t need to adapt my prompt with anything like we had done before. So
41
let’s run that and I’m going to test the others while we wait for them to generate. Okay,
42
that’s running; let’s do exactly the same thing again: get rid of this, get rid of this.
43
And let’s do the second one, a very simple prompt, woman waving at camera, static camera
44
movement, light rain on the windows, and exactly the same thing: 15 seconds, let’s generate.
45
Then the car drifting one: car drifts in street, camera follows, people in background. Watch, you’ll
46
see these are very basic prompts I’m doing here. And I’m not doing anything to lengthen
47
these out or add to them. I just want to keep it as basic as I can with the image.
48
Now handheld camera movement. So there might be a bit of shake on this man smoking. I’m
49
not telling him where he’s looking, if he’s looking at us, etc. Let’s keep it as basic
50
as that, still 15 seconds, run. Then: camera follows behind this woman as she walks, her hair bounces
51
naturally as she walks. Now, I’ve used the word bounces here. That’s
52
obviously what we’d use just speaking naturally, but let’s see if that triggers something quite
53
peculiar. I could have just left this at camera follows and it would have been
54
very natural; I want to see what prompting with language like this does. Okay, run that. And
55
now let’s check out these generations when they’re done. And we’ll compare them all side
56
by side. Okay, the first video has finished generating. Let’s take a quick look at this.
57
So the movement’s pretty good. They look a little bit plasticky. There is movement in
58
the background, you see the other animals there, and then more of a run towards the
59
lens. But it’s definitely not a pride of lionesses running forward. The movement is anatomically
60
quite good. Look, especially at the front one here; you’re focusing on the two sort of
61
chest-bone shoulders as they reach there, that’s pretty good. And the haunch even comes up, slightly
62
skewed. But their coloring doesn’t look real, does it? They look a bit more highlighted than
63
in the initial image; it blends and then gets slightly more contrasted, almost. See, the back
64
ones aren’t covered in smoke, they’re quite dark and stuff. It’s nice movement anatomically
65
like, well, great. I don’t know if it can be used as broadcast-quality
66
footage, especially as the front one starts to run but one of the back ones disappears, and
67
the other back one still keeps a similar distance. Anyway, it’s a good effort
68
for movement. And we’ve come a long way, haven’t we, since AI video started. But it’s not 100%; like
69
a six and a half out of 10 or so. Next one’s generated here. This was the woman that we
70
wanted waving, with rain on the window here. Let’s play that one. Yeah, she
71
definitely waves. I can hear the rain in the audio. She’s waving for the full 15 seconds,
72
which is what I prompted with woman waving at camera. Her waving is gentle; the wrist
73
moves well. Look at the wrist as it slightly moves, and the coffee swirls also as she does.
74
Woman in the background doesn’t seem to move although the man opposite her does. Oh no,
75
she takes a sip and puts the cup down. The hand is anatomically quite good; there isn’t
76
blurring and morphing. It does look real. You wouldn’t want to use 15 seconds of this
77
shot but that’s pretty good. The movement of the coffee and stuff. Nothing moves out
78
the window. I see some rain falling on the window pane slightly. That’s pretty good.
79
That’s good. Okay, let’s see this next one. This was difficult. This was the car drifting
80
I wanted movement. Let’s see here. The wheels turn. Well, it’s drifting in slow motion.
81
Is it slow motion or is it just bad motion? Oh no. And then it speeds up at the end. Yes.
82
So it’s like, here’s a slow motion scene. So the audio should be slightly more slow
83
motion and deep, I think. And then it turns all the way around. And then it’s like we
84
snap back into real time, and traffic comes back in. That’s pretty cool. That’s
85
really cool. I like that shot. That’s good. Lots of fun doing that. Okay, smoke drifting.
86
This was always a difficult one to get right. Let’s have a little look at this. I don’t
87
know about the smoke. That’s okay. And he takes the cigarette out of the mouth and a
88
bit of smoke comes from his mouth. But initially, at the start of that shot, if we watch, there’s
89
a lot of puffing smoke coming out. It’s so much better than, like, a year ago; this
90
would have been absolutely terrible and just looked completely fake. This at least looks
91
kind of real. It’s just that you’d want to use parts of this 15 seconds, I think, not all of it.
92
But he doesn’t look plasticky. His face keeps the wrinkles and the darkness where
93
it should be. That’s very realistic. And this was handheld camera. You see it moves jittery
94
like that. That’s really well done. It’s one of the better smoke videos I’ve ever seen.
95
Although this initial part, you might start from here. It’s actually really good. I like this;
96
that’s well done, Kling. That’s really good. Okay, and then we have the woman walking and
97
I said hair bounces, remember, which is kind of how we’d say in conversation, but I don’t
98
want it to actually bounce. Let’s see what Kling picked up for that. Okay, movement.
99
Yes, yes. Seems like there’s some wind blowing it, which is pretty good. And it’s definitely
100
got realistic movement. The lights. Oh, that’s nice. And it flicked over almost entirely
101
on the left-hand side. And the light’s hitting it in different ways, like it is on her jacket.
102
All the background movement is good. No one morphs and disappears; that
103
guy’s shirt possibly went a bit skewed. The woman walking in shot went a bit funny there.
104
But the focus is going to be on how you can imagine her in a movie. And we’re
105
following along. That hair does look very good and realistic. It’s nice. Really, really
106
well done. So the shots are great with Kling, and I put them to the test now with some really
107
difficult things here. I think the main thing we’re going to want is the 15 second
108
length, which is amazing, possibly with multi shot enabled. So 15 seconds is pretty long for
109
a single clip on most videos. If you had multi shot, then it may change over
110
to other cuts within that shot, you know, when you’re prompting or if you did it with
111
custom multi shot. But the fact that we can have 15 seconds is incredible. And this is
112
high quality. I think Kling right now is level with Veo, but I think Kling is slightly more realistic
113
in some of these bits. Really, really nice. Really good. Okay, let’s move on to the next
114
lecture, now in Kling 3.0: I want to talk to you about multi shot.
— KLING 3.0 – Multi-Shot —
1
Now, still in Kling 3.0 under video generation, I want to talk to you about this down here,
2
multi shot, which I’ve enabled now and you can do custom multi shot. So what that is,
3
is basically, you can upload, as I will, this image. So here are two
4
people in a coffee shop, having a conversation of sorts. And rather than just have one image
5
to video right here, one shot, I’m going to direct it to have multiple different
6
shots, i.e. you could have this shot, and then you could have a shot over the shoulder looking
7
at the man, over the shoulder looking at the woman, as they have a conversation with dialogue,
8
all from one image and one prompt; that is the aim for this. So rather than have to do this take
9
and then manually create your own image or video over the shoulder here and here, you
10
should, in theory (we’re going to put it to the test, and of course this will
11
get better and better), be able to just do this from uploading one image. If I just drag that
12
image and put it in here, one image, and I’m able to have multiple shots. So I can have
13
multi shot enabled here and just prompt for it, or I can come over and do custom multi shot.
14
So right here, I can add shot one, shot two, and I can keep adding shots for the 15 seconds
15
that we’re doing; you see it’s divided here, seven seconds, eight seconds, and so on. So
16
if I just add a prompt in here, so here are the three shots I want to do. The first one
17
here is this shot I’ve got right here, wide shot camera very slowly moves in the man says
18
whispering and looks around. How did you find me? How did you even know where I was?
19
So I’ve given it the shot right here, which is the opening shot right there. And the way
20
he’s speaking, whispering, he looks around, let’s see if it does that, then it should
21
cut to over the shoulder of the man close up of the woman’s face. She says confidently
22
with a smirk. We have our ways, Mr. Broad, anyone can be found. And now I wanted to flip
23
to over the shoulder of the woman close up of the man’s face. He says a little worried
24
and softly, and the money, I assume you’re here about that. So it should be a short scene
25
here. Well, obviously, this woman, some kind of detective, I don’t know, or working for
26
a gangster, I have no idea, has found this man. But to give you an idea of how to do
27
this, so there’s going to be 15 seconds, you can see I can click into here. And I can actually
28
just change the duration of this, I’m assuming it’s going to be something like eight seconds,
29
three seconds, four seconds, something like that. You can always have more at the end or beginning,
30
then you can play with this and try different lengths. So let’s play with this, shall we?
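The custom multi-shot panel above is essentially a list of (duration, prompt) pairs whose durations have to fill the clip length. Here’s a small sketch of that bookkeeping with the three shots from this scene and guessed durations (8 + 3 + 4 = 15 seconds); this is my own helper for planning, not a Kling API:

```python
TOTAL_S = 15  # clip length chosen in the lecture

# Durations are guesses; the lecture suggests tweaking them per shot.
shots = [
    (8, "Wide shot, camera very slowly moves in. The man, whispering, looks "
        "around: 'How did you find me? How did you even know where I was?'"),
    (3, "Over the shoulder of the man, close up of the woman's face. "
        "Confidently, with a smirk: 'We have our ways, Mr. Broad.'"),
    (4, "Over the shoulder of the woman, close up of the man's face. Worried "
        "and soft: 'And the money, I assume you're here about that.'"),
]

def check_shots(shots, total=TOTAL_S):
    """Confirm the per-shot durations fill the whole clip exactly."""
    return sum(d for d, _ in shots) == total

assert check_shots(shots)
```

If a regeneration feels rushed or draggy, shuffle seconds between the pairs and re-check before re-entering them in the panel.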
31
You can also obviously add your end frame; if you want to start and end on this shot,
32
then you could. But I want to put this to the test, because if it works, and if it’s
33
great, then now we’ve got whole scenes built off one image, which is incredible. And that’s
34
the future. So let’s run and generate this. Now, while that’s generating (it says a few minutes
35
right here), there’s actually a cool feature in here where apparently I’m able to do this
36
in a foreign language. So if I come over to ChatGPT or something, and I say translate
37
these to Japanese, these are my three lines that I had in here, I can add those there.
38
Let’s hit run. And let’s get these translated into Japanese. Now I do not speak Japanese.
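If you’d rather script that translation step than paste into ChatGPT by hand, the same request works through any chat-style LLM API. This sketch only builds the message payload (my own helper; pass it to whichever client you use), numbering the lines so the translations drop back into the right shots:

```python
def translation_messages(lines, language="Japanese"):
    """Build a chat request asking for a numbered, line-by-line translation,
    so each translated line can be pasted back into its matching shot."""
    numbered = "\n".join(f"{i + 1}. {line}" for i, line in enumerate(lines))
    return [{
        "role": "user",
        "content": (f"Translate these lines to {language}. "
                    f"Keep the numbering so they stay in order:\n{numbered}"),
    }]

# The three lines of dialogue from this scene:
msgs = translation_messages([
    "How did you find me? How did you even know where I was?",
    "We have our ways, Mr. Broad, anyone can be found.",
    "And the money, I assume you're here about that.",
])
```

The `msgs` list is in the standard chat-completion message format, so it can be sent to most hosted chat models as-is.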
39
I do not know if they’re going to be accurate. That’s a whole other thing right here. So
40
I’m going to copy and paste these in, making sure they’re in the right place. Anybody watching this
41
who is Japanese can tell me how good it is. Let’s run that there. And I’ve replaced all
42
three of these. So let’s run that once again. I’m not going to comment on the accuracy of the
43
Japanese; this is just to show you that apparently this is possible. Let’s run and test that. Now
44
Obviously, you could have described more: I’ve described she says confidently with a
45
smirk, he’s whispering, etc. I didn’t specify accent: he’s got an American accent, he’s
46
got a British accent, Australian, he’s got a husky voice, etc., which you could go further
47
into. But let’s just test and see how well this does in this tutorial with multi shot.
48
So the first generation, the one in English, has finished. Let me give you a quick
49
play of that. And let’s have a watch. So the camera moves in nice and slowly from the wide
50
shot he looks around. How did you find me? How did you even know where I was? We have
51
our ways, Mr. Mr. Broad, anyone can be found. And the money. I assume you’re here about
52
that. Okay, so it sounded as if I’d prompted with a double Mr. in we have our ways, Mr.
53
Broad; she said it twice. So the downside is the lip sync: when you’re using this multi
54
shot, it’s maybe slightly off. That wasn’t amazing with regards to lip sync, was it? But what have
55
we got here? We’ve got one prompt off one image. I’m not sure about the person; let me look at the
56
face here. And her face. Let’s check that over. Okay, that is him from this shot when
57
he turns his head right here. But I’m not sure that’s the same guy as this. So
58
for consistency, you may not want a wide shot like this. But we have got, from
59
one text prompt breaking down the shots here, this wide shot from the one image
60
that we’ve prompted in and added right there. We’ve got direction to have over the shoulder
61
over the shoulder conversations. If you didn’t need text at all, this would be flawless and
62
perfect. If you do need dialogue, I mean, then yeah, some of that lip sync was off.
63
But you could regenerate it again; it could have just been this sole generation. What
64
we have got here is an incredible thing that’s going to change filmmaking with AI forever.
65
Right now, this is the first part of what people wanted: to essentially take a scene
66
and be able to do multiple shots from one image, one scene, with a lot less effort than
67
going into making each individual shot and reverse shot out of an image and turning that to video.
68
When this is flawless, and I’m sure there’s going to be updates every single day to weeks,
69
that’s going to happen, it’s going to get better and better as it learns more, this
70
is going to be game changing; it’s going to be really good. Now, especially if you’re
71
doing something in animation, then it’s probably going to be even better. This here is ultra realism.
72
If you are going to do something that doesn’t need dialogue even better or documentary style
73
like that, this is going to be game changing for you guys. So let’s wait for the Japanese
74
one to finish generating right here and let’s take a look at that. Now the Japanese one
75
has finished. Let’s give this a little play and a watch.
76
That was incredible. That’s amazing. Obviously the lip sync is slightly better than the English
77
one there. But obviously there’s longer speech to say it in Japanese, by the
78
sounds of it. But look what it’s doing. How good is that? That’s so cool to watch. Really
79
nice. Okay, I love this a lot. It may have just been this generation
80
with the lip sync, but as soon as this tool gets a few bugs ironed out (and it may just be this
81
shot), it may already be incredible, which I think it will be. And again, if using animation
82
or no dialogue needed, or even if the dialogue is simpler or whatever: incredible tool.
83
Multi shot is going to change filmmaking with AI. Love it. Really, really love it. Okay,
84
next I’m going to talk to you about another feature in 3.0 for consistency cloning with
85
elements. Okay, let’s discuss.
— KLING 3.0 – Consistency with Cloning – Elements —
1
Now, let’s talk about consistency using a little bit of a new feature of cloning inside
2
here. So previously, you’ve seen me talk about using elements, where we can swap out someone
3
or an object. You could swap out and clone yourself, but the consistency wasn’t great,
4
and the voice also wasn’t great. So here’s what you can now do in 3.0: you can pretty
5
much upload someone to clone, as long as you have permission (in this case I’m going to do myself; it’s very
6
important you have permission), that’s their image and their voice,
7
and then dump that information onto an image to create more consistency.
8
Let’s see if it works, shall we. So the first thing I’m going to do is I’m going to upload
9
and add in this image. So here’s a man in knight’s armor, by the looks of it, walking through a burning
10
village right here, face front on. Let’s swap that person to be me. So if I just drag
11
and drop that image right there into here, you’re going to see that pops up below this
12
once it’s uploaded here, bind elements to enhance consistency. So I haven’t done this
13
before. So I’m going to show you step by step as if you are doing it for the first time,
14
click on here and click Create. Now, here are all the guidelines, to make sure
16
that you have permission. Only do this with someone you have permission for. Click
16
Accept. And here you can upload a front-facing character video or image. I’m going to actually
18
upload this clip: “…can also be really, really quick. Let me show you now this workflow…”
19
A good short clip; I think the requirement is three to eight seconds, which isn’t that
20
much data, but that’s just a short clip right there, and I’m front facing. So let’s click
20
and add that, add video. Now I can add voices. That’s the original, so it can be my voice.
21
If I wanted to, I could choose someone else’s voice right here, enter a character’s name
22
here, let’s just call this me. And then I can hit generate. And now it has me stored
23
right here. So if I add in a quick prompt: I’ve got handheld camera shot, the man
24
@me (which I’ve already linked; make sure that’s linked and selected up here) says “Where
25
are they? Where are all the children?”, he looks around panicked and moves side to side. So I haven’t
26
actually given it instructions on how to say it. But if he says it in a jovial manner,
27
then we know it hasn’t got it, because he looks around panicked after that. And given
28
the scene, I’m going to actually move this, we don’t need 15 seconds. Actually, I’ll keep it; I’m
29
sure it’s going to do some strange stuff at the end or something, because I’ve given it
30
way too much time. But that’s okay. You could have had him talking more, moving
31
around, or moving to another shot if I had multi shot, as we spoke about in the last lecture,
32
enabled. But this should now be swapping out this face for mine, with my voice in here; it’s probably
33
going to have some background noise that is picked up on there. Let’s hit generate.
34
All right, that has finished. Now, this took a long time, like 30 minutes to do 15 seconds.
35
I’ve got my start frame and got my element in here. But if I play this for you: “Where are all
36
the children?” All right, let me just play this whole clip for you. “Where are they? Where
37
are all the children? The children?” Alright, so let me just get the original image up here
38
to compare that. It doesn’t look, essentially, like there has been much of an effort made
39
to make them look more like me, the hair looks pretty much the same. And it doesn’t seem
40
to have changed the voice that much. So this was promised to be quite a bit better than
41
it was. I mean, yeah, the character does look real and it looks great. Does it look a lot
42
like me moving in there? No, I wouldn’t say that it does necessarily; some work to be done
43
for this. And it’s quite an expensive and long generation to do it. It may be this clip,
44
it may be my face, the guy might look too similar to me naturally anyway. So you test
45
it with yours. But I did want to show you this feature, because the better it gets as things
47
progress with Kling, obviously, that’s going to be a game changer if you can keep the same
48
character consistent throughout. So that was cloning with elements and how you create it. The results:
48
not incredible, but not terrible. I mean, it looks really good. It says exactly what
49
I wanted it to say, which was great. And it has the background audio, etc. Really good. As for
50
the consistency of looking quite similar, I’m not 100% sure. So next I want
51
to go into Omni, which you’ll be familiar with over here, and use some VFX to change
52
things in there. I’ll see you on the next lecture.
— KLING 3.0 – OMNI (VFX) —
1
Now, I want to talk to you about some stuff right here. Remember 01, which we spoke
2
about in previous lectures. Now I’m going to go into Omni right here. And I want to show
3
you, basically, some VFX things that we can do. I want to upload a video. I’ll show
4
you that video here. I’m going to use this video right here of this woman walking towards
5
us through what looks like Beverly Hills or something. I’m going to take that and I’m
6
going to do some things like replace the character, replace the color grade, change
7
the camera angle from the original video, change the color of her outfit, and do stuff
8
like change the weather. So if you have a clip or something that you want, and you want
9
to change something VFX style, let’s do that. So if I grab that video clip, and I just drop
10
it right into here, and I can just add a simple prompt. I’m going to write change the woman
11
in @video, that’s the video there, with the man in @image, and I drag this image into here, this image
12
of me. So this is quite a tall ask. And I don’t know what
13
the results are going to be. But you can obviously use your imagination to understand that it’s
14
swapping in me, a man in a white jumper, for this woman in Beverly Hills wearing a dress.
15
Now I’m going to keep this at five seconds, I don’t need any longer, though I can go up
16
to 10 here. And one generation, keep the audio everything the same. And let’s just swap those
17
out. Okay, that’s running. In the meantime, while that’s running, let’s do the next one.
I'll show you all the results at the end. For this one, I'm going to remove that and prompt: change the colour grade in @video with the colour grade of @image. I remove the previous image and add this one, a blue-hued image of a woman, because I want to pick up that colour grade. Now, that's pretty extreme, going from sunny to a blue style, so I don't think it's going to be too realistic or the kind of look you'll want. But understand that if you have something like a blue-hued, slightly dull day and you want it to be sun-kissed, you could grab an image and do exactly that. So let's test this; it's the extreme case, and you probably won't want to do something quite like it, but let's run with it. For the next one, I have that wide shot of the woman, and I'm going to say: change the camera angle of @video to be a close-up of her face. So this should now be a close-up of her face. I could also say slightly from the side, or from behind, but let's go with that and run. Now, I'm also going to remove that last bit and say: change the colour of the woman's dress to blue, not giving it any particular kind of blue or hue, just blue. Let's run with that. And for the last one, I'm going to say: change the weather in @video to be lightly snowing, which is extreme because she's in a dress, but let's see what it does. Okay, run those, and I'll wait for all of these five
to generate. Then let's check the results. Okay, the first one has finished, and look how good it is. Remember, this one was taking the woman walking through Beverly Hills and swapping her with me; it didn't see what trousers I was wearing or anything like that, and that definitely looks like me. It's taken the same framing and everything. Let's play it. I'm walking. It's not the most ultra-realistic at the start, but it gets better; it looks more realistic from here onwards. I'm holding a bag right there, which goes under my arm weirdly, but it has just swapped me in. That's definitely me in place of the woman, and it's the exact same shot: the same framing, the sun coming through, everything. It has just swapped it out. Perfect. Brilliant. Love it. Really, really good. Now, the next one, if you remember, was the blue colour grade, which was slightly extreme; this looks more like a music video. What's extreme is that it went from this sunny shot to extremely blue. Let's play it and have a look. The exact same image, and it's changed it to blue. That's how her dress would look in that blue light, and I'm pretty sure it's very accurate; even the sun tries to come through this whole blue hue. So if you're doing a themed movie with a real feel to it (think how the Godfather movies are kind of a yellow hue, or Terminator a blue hue), or if you wanted that great eerie sci-fi kind of shot, you could do this. Or if you just want to shift a sunny day slightly, into twilight or something like that, you could do that too. So for the next one, remember, I just prompted and asked for a close-up.
Same shot, close up. Amazing. So now you could go from your original wide shot, the one we had down here of the woman walking through, and jump in close without zooming and ruining the quality of your video. This is now that woman walking, close up, same shot. Really nice. And again, I could have done it from the side. I could re-prompt this and say from the side, and from behind, and then you've got a wide shot of the woman walking, a close-up of her walking, a side shot and a back shot. You could do a full mini scene of her walking, no problem. Multi-camera, basically. Now, the next one, if you remember, I said change her dress to blue. So we had her dress right there, and now it's blue. Let's take a look. Swapped it out perfectly. Does it look realistic? Yeah. You've got the shading, the light coming in, the dark parts. It looks really good, and it's just swapped that out. That was so good. It should have swapped her glasses as well, really, if I wanted everything to match. But isn't it incredible that it was able to swap that out, no problem, and it's not morphed in any way? Really nice. Now, the last one was light snow. Same shot, and now she's walking through snow. It has changed the exact same shot: there's snow on the cars, snow on top of the lights, snow on the ground she's walking on. She would be freezing, I think. And it looks perfect, even with cleared patches in the snow and footprints already on the pavement. Wow. Really nice. I'm a really big fan of Omni and these VFX you can do, basically swapping out and changing things. It's a real game changer for your video, whether you're making small changes or big changes like the ones I'm showing you here. Kling 3.0 has done an incredible job with this. It's leading the field in this kind of thing: swapping things out, changing the weather, changing the colour of an object, changing the camera shot, changing the hue, even changing the person. Really, really great. Wow, I'm blown away. Super good. OK, that was the updates for 3.0. There were also updates on the image side; you can go and check out Kling 3.0 for images in the image part of the course. I'll see you in another lecture really soon.
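The five Omni edit prompts above all follow one simple pattern: a plain instruction that references the uploaded clip as @video, and optionally a reference image as @image. Here they are collected as data, purely as an illustration of the prompt shapes used in this lecture, not any official Kling API:

```python
# Illustrative only: the five Omni VFX edit prompts from this lecture.
# "@video" / "@image" stand for the uploaded clip and reference image.
omni_edits = [
    ("replace character",
     "Change the woman in @video with @image"),
    ("replace colour grade",
     "Change the colour grade in @video with the colour grade of @image"),
    ("change camera angle",
     "Change the camera angle of @video to be a close-up of her face"),
    ("change outfit colour",
     "Change the colour of the woman's dress to blue"),
    ("change weather",
     "Change the weather in @video to be lightly snowing"),
]

for task, prompt in omni_edits:
    print(f"{task}: {prompt}")
```

Notice that four of the five reference @video directly; the dress-colour edit just names the object, and Kling works out what to change.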
— Adobe Firefly Video —
Now, this is an update lecture. I'm adding this after completing the course; I said I would always keep you updated, and this is one of those updates. Adobe Firefly has released video. Yes, this is exciting. You're in the video section, so of course that's what we care about. Maybe by the time you're watching this it's been out for a while, but it has just been released, and I'm going to play with it and show you. There are a few things here, and some things still coming: text to avatar, which is like a talking head, as well as enhance speech. There are three things I want to try. First, text to video, just like we see in Sora; I don't really like using text to video on most other platforms, but Sora is meant to be okay at it, and it is pretty good, isn't it, with a little bit of morphing but not too much. Second, image to video: does it compete with Runway, the best model for image to video? And third, translate video, which is pretty cool, right? Being able to upload a video and have it completely translated. So let's test these three
13
parts out. Okay, let’s start first with text to video in beta mode right now. So you just
14
sign in with if you have an Adobe login or create one and stuff, and then let’s go over
15
start on the right hand on the left hand side. Sorry, I always mix up right and left. There’s
16
something in that. So model Firefly. This will eventually, of course, have different
17
models. I expect Firefly video model. I expected in here, you’re probably able to do image
18
and everything else, or they’re going to have as they get released. And we see inside
19
Pika Runway, etc. You have different versions of the same video model. Right now we just
20
have that. Now let’s choose our aspect ratio. Do you want it widescreen for YouTube? I’m
21
going to for this or not or any other video platform like that or portrait for shorts,
22
reels and stuff. Let’s keep it at widescreen. Let’s do 24 frames a second, which is all
23
you can do as the time of recording this. So this is going to be quite familiar. Do you
24
remember back in the image generation section where we used Firefly for image generation?
25
They’ve got a lot of the controls down here on the left. So I can do stuff like, OK, is
26
this an extreme close up, a close up shot, a medium shot, long shot, extreme long shot.
27
I’m going to do a medium shot. Let’s start with some of that. Now, my camera angle is
28
an aerial shot. I level high angle, low angle top down. This is brilliant because I’ve struggled
29
so many times to get low angle video shots inside when I’ve been trying to generate for
30
imagery, et cetera. You saw me talking about that earlier. Let’s do an aerial shot, actually.
31
And what I want it to do is don’t want it to do. Yeah. OK, let’s make it zoom in. Let’s
32
tell it to do that. OK. Anything else with the seed? We don’t need to. We’re going to
33
do this right now. Talk about in the future, perhaps if you want to see this, which is
34
where you can pretty much store seed and use it again and again. Not important for this.
35
Let’s do an aerial shot over the city of London. Bright, sunny day. Now, I’ve already told
36
it’s going to be a bit of a zoom in. It’s a medium shot. I don’t mean medium shot.
37
I want it to be a long shot and everything else is set up. Sixty nine. OK, let’s generate
38
that and let’s generate it. I was actually quite quick. That was probably as fast as
39
runway. Let’s have a little play of that, shall we? OK, nice. It’s almost got this time
40
lapsing traffic right there. I didn’t specify anything about that. We’ve got a five second
41
clip. OK. All right, great. And right now, if I click over here, I can play back speed
42
picture and picture. I can do this. I don’t need to do any of that. I can also see my
43
generation history, what I can’t do right now. And it’s probably going to be coming
44
in the future as stuff like to extend this clip to add something in which things like
45
a peeker and runway are obviously a little bit ahead of. But you can already see this
46
model is quite good, isn’t it? That if you just had one suite, just the and that’s what
47
all designers have. If you had the Adobe suite and you already have a image and video and
48
then sound and everything else all in one. Really nice. I mean, this does look like London.
49
It’s got a bit of an old church there. This is definitely the Gherkin from London. We
50
call it this is the we call it the toaster or the radio walkie talkie building. Definitely.
51
These are the old tops near works like Waterloo or something. This traffic here, I could specify
52
traffic, no traffic or whatever, but it definitely is that drone shot coming in here. It’s a
53
zoom in shot. Definitely. A long shot. Definitely. It’s 69. Yeah, it’s everything that I asked
54
for. Perfect. Really nice. I do love all Adobe suite stuff. They’re always so good on me.
55
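As a quick bit of arithmetic on those settings: a 5-second clip at Firefly's 24 frames per second is 120 frames, and widescreen means a 16:9 width-to-height ratio. The 1920x1080 resolution below is my assumption for illustration, not something Firefly states here:

```python
fps = 24          # Firefly's frame rate at the time of recording
duration_s = 5    # length of the clip generated above
frames = fps * duration_s
print(frames)     # 120 frames in the clip

# Widescreen 16:9 ratio check against an assumed 1920x1080 output
width, height = 1920, 1080
print(width / height == 16 / 9)
```

So every 5-second generation is really 120 still frames, which is worth remembering when you think about why extending or editing clips is expensive for these models.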
Let me go back. Now I want to do image to video (I keep misspeaking; it's too early in the morning to be recording this). Let's generate an image first, then turn it into video. Just like before, let me set this up: widescreen, Firefly Image 3 will be good, content type photo, hyper-realistic. Describe the image: a man looking directly at camera in New York City, cinematic. Okay, let's generate. Not too much detail in the prompt, but that'll do. These are quite nice. Let's go with this one; it doesn't really matter, I just want to test this capability versus text to video. Download that, come back out, go back to video, and choose image to video. Now, the eagle-eyed among you will notice it looks exactly the same, and this part was also on the previous page, so it doesn't really matter where you start from; you'll be able to generate video either way. So let's hit here and select our first frame. The last-frame option is greyed out, but I assume in the future we'll be able to select a last frame just like in the other video models, and maybe by the time you're watching this you can. So I'll just upload. Yep, yep, I agree, upload. Now, this is the real test, isn't it? Because we know Sora has a lot of limitations, or struggles, with uploads of people, not trusting that it's not a celebrity or someone you don't have permission to use. So I'm going to prompt: man smiles at camera. Sometimes a model mistakes this and puts an actual camera in the shot, but let's just leave it at that. I haven't told it to do anything else: no expressions, nothing in the background, no waving. Hit generate, and it's generated. Again, I'm very impressed with how quick Adobe is here. Let's play it and see. It's zoomed in, which I didn't ask for, and he does smile. His face does warp ever so slightly. Oh, I can see what happened; this is why I like doing these live, because you can see a mistake. Just like on Runway, if I leave a setting on: remember when we went into the camera settings and I left zoom in on when I should have selected none? I left zoom in on here, so it zoomed in on him. Good to know; I could have deselected it and it wouldn't have moved. It zooms in on him nicely, but look here: he does sort of change face shape, so I guess there was a bit of morphing. Still, that's pretty realistic, pretty good. Great stuff. I love that it can do people without the worry we have with Sora right now, so we can animate people with this, which is perfect. I can do text to video and have a great shot created for me, or upload an image and animate it as I see fit. Brilliant. The last thing I want to test
is, if I come back here, translate video. Let's go back to video and choose translate video. This is very interesting; there are other tools that can do this, of course. Let's get started. I'm going to upload a file here. That's uploaded; let me give you a quick play of it: "We're going to have many lectures and show you many, many tools. We're going to go through creating AI images, AI video, audio." So just a quick test clip. Over here on the left I'm selecting my source language, auto-detected as English, which was correct. Then I want to translate it; let's do French, that sounds nice. Then speaker one. Generate. Okay, language: French. Let's have a little play and listen. "Nous allons donc donner de nombreuses conférences et vous montrer beaucoup, beaucoup d'outils. Nous allons donc passer par la création de vidéos AI, d'audio." I don't know about it saying "AI" like that; maybe that is how they say it, and I need a French person to let me know. But isn't it exciting? I could now take all my courses and put them into any language I want. Let me go back and see what languages there are if I do it again: English, Spanish, French, German, Italian, Portuguese, Japanese, Norwegian, Korean, Hindi. This is brilliant, isn't it? I love that feature. So what do I think about Firefly? I think it's absolutely just at the beta stage of AI video, and it's going to get better and better. The images, we know, are fantastic; I love Midjourney, but Firefly is great too. And if you already have the Adobe suite for editing like I do, with Premiere Pro or anything else, then great: you use the images from Firefly, use Photoshop, all in one, and now video and audio are coming, plus translate, everything in one platform. You could have just one subscription to Adobe and have everything you need. Very exciting. So that was the Adobe Firefly video update lecture. Let's continue and have a look at some other video tools.
— Luma Dream Machine Overview: Get Started —
Now I'll show you another tool that's very popular and has been changing quite rapidly lately: Luma Dream Machine, by Luma Labs. If we go back onto our page, you have access on AI video dot school under AI video tools; scroll down to Luma Dream Machine and the drop-down here gives you access, where you can click straight through to the site, plus my breakdown of everything that's inside. Now, a quick note: even whilst I've been recording this course, the Luma interface has changed and some features have been added. So it might not look exactly the same as I'm showing you, but this is pretty up to date, and if there are any significant updates I will obviously re-record or cover them here. For account setup, you can either use the free trial they offer (it's quite slow to generate on the free trial, but it gives you a good idea of the capabilities), or, like I've done and lots of people I know have, you can subscribe for 9.99 a month. That's the price right now, and you can cancel any time, so you could subscribe for a month, give it a full test and a big play, and if you don't like it, cancel and you're not subscribed any more. When you go through to Luma Dream Machine, the main page looks like this, explaining loads of their tools, with lots of nice information you can go through, and you can check out everything that's available, because once again this platform is huge.
Dream Machine, or Luma Labs as a whole, is massive, and there's lots you can do with photos, videos and everything else. So I'm going to cut through a lot of the stuff you don't need; for the sake of this video and this course, we want to generate video, obviously, so I'll concentrate on that. Let's go through to Dream Machine. The URL is dream-machine.lumalabs.ai, and you'll come to something that looks like this. Let me make that a bit smaller for you. Over here, and once again it's listed in my guide under getting started, you have Boards or Ideas on the left. If you click on Ideas, you'll see everything you've ever generated before. Here's some stuff I was playing with: someone walking through New York City, an evil bunny playing by a waterfall, a big guy who looks a bit like The Rock. I also played with some Mark Zuckerberg and Elon Musk UFC stuff to see how it handled the visuals; it was OK, the movement was pretty good, not flawless. And Donald Trump as Rocky, which did not go well: he turned into a female in the end. Now, this is not my favourite compared to Runway, which we've seen, but a lot of people do love it, and its accessibility is great. So that's Ideas, and I could just start here: put in my text for text to video, confirm it's definitely going to be a video, set the aspect ratio, and go. But I think the better way is to come into Boards. Think of these as your projects. If I was making one video, I might have one board, or perhaps one board per character; I like to work that way, and you may or may not. Click here and let's make a new board. What is it you want to do now? I've detailed your options in my guide: upload an image, generate inside Luma, etc. You can either generate an image first and then make that into a video (we'll do that in a moment), or upload an image you've already downloaded from another platform like Midjourney (we'll also do that in a moment). Then set your aspect ratio however you want it: vertical for things like Reels and social, 16:9 for regular video, or really wide if you're making a Western or something. So let's start step by step. I don't want any of these suggested ideas, thanks, and we'll come to characters in a moment. I'm going to create an image first: a man walking through New York City, sunny, bright daytime, busy streets, aged 40, in a suit. Now, the general guidance for prompting inside Luma, whether it's text to video, text to image or image to video, is to use natural language, a lot like ChatGPT or DALL-E, which we used earlier. Use a conversational style. "A man walking through New York City" is a good prompt; better is "a realistic video of a man walking through New York City on a sunny, bright day". Just for example: conversational.
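The prompting advice above, a full conversational sentence rather than a keyword list, can be sketched as a tiny helper. The function and its parameter names are my own illustration, not part of Luma:

```python
def conversational_prompt(subject: str, setting: str, style: str = "realistic") -> str:
    """Build the kind of natural-language sentence Luma responds well to,
    instead of a bare keyword list like 'man, New York City, sunny'."""
    return f"A {style} video of {subject} in {setting}."

# Keyword style (weaker):      "man, New York City, sunny"
# Conversational (preferred):
print(conversational_prompt(
    "a man aged 40 in a suit walking",
    "New York City on a bright, sunny day",
))
```

The same shape works for image prompts too; swap "video" for "image" in the template.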
So I'm going to say: a realistic image of a man aged 40 in a suit walking through New York City, bright sunny day. I'm asking for 16:9 and an image this time, and we're going to turn that image into video. In exactly the same way, you could have clicked here and uploaded an image you'd made in another platform, then turned that into video. Let's go. Okay, it's finished dreaming, and that was actually pretty quick; you'll find it's a little slower on the free trial. Here are my images. Nice. I quite like this composition, it's a bit eerie. Let's go through some of these: a man walking through what looks like the centre of Times Square, this man stood still, and this man walking. Great. From here I can either modify an image and change it (not in-painting like we've seen before, but modifying through a description), and I can also do things like give it first and last frames. I don't want to do that; let me just make this one into a video. I'm going to say: man walks, camera follows him. A little more conversational than I would perhaps use in other tools. Let's have a look at what it does, and you're going to see some of the limitations now. It's not on the page any more, but you would have seen me earlier in the course showing you limitations; we had a whole lecture on limitations with AI, and Luma Labs was really good at showing the ones it has: some morphing, some odd movement, hands that weren't quite realistic. But it's not bad; Luma is pretty good and getting better and better. Let's see the video. Alright, definitely a man walking, but look, there's not much realism in his hands. Here's a bit more: this woman is about to walk through that person, and there's some slight morphing there, but it is definitely a moving image. Let's take a look at the next one and see if it's any better. Oh, this one: it's quite nice that the camera moves, but look at these people just morphing right there. Still, he's walking through, and the camera kind of jolts with realism, like I'm following him. Okay, great. Now, the other way, of course, is to upload an image. I'm going to do that with an image you'll probably remember from earlier in the course: a drone shot over London. I'm going to say "a drone flies over London and we follow", and that's all I'm going to give it, actually. Now if I had an end shot, for example if I generated a second image right by a skyscraper or something, I could add it as the end frame and it would go from one to the other; that's keyframes. I could modify this image if I needed to, but we're just generating a video right now. Make sure you're still on video (so many times you do this with images selected, and it's really frustrating), and 16:9 is what I want. So let's upload that and see what the video does. Okay, that was quite fast. Let's have a little look. This one moves forward and we almost go past the drone; that's quite nice, actually, and the movement is good. There isn't any morphing; the propellers aren't flawless, but they wouldn't look crisp if a propeller was really spinning, so that's believable. And nothing morphs in the background or anything; that's nice. Oh, and this one goes straight past. Then some kind of bird or drone comes into shot as it lifts up; if that weren't there, it would be quite nice. Now I can modify this, and of course I can extend the video if I want, for a certain amount of time. Or let me go into here.
I'm going to say modify. If I go back to our page for Luma Labs Dream Machine, especially the advanced features, let's go through some of these. Modify: making sure I'm still on video, which it should be, I'm going to say "night time, dark, horror, thriller", just giving it some adjectives so it knows: scary. Okay, let's modify and check what it does. Unlike some other tools, there are no explicit camera movement controls like we saw inside Runway and different models; it's all descriptive. A lot of people like this if they want to be less technical, because they can just be conversational, but ultimately it gives you slightly less control, or you may have to do more iterations to get exactly what you want. It is very responsive, though; that's the bonus of Luma Labs Dream Machine, it responds well to conversational modifications and prompts. Alright, let's take a little look. It's definitely night time, definitely slightly scary, and we're going over the top of the drone. Oh, cool. If I extended that, it would go straight past it; let me just extend it. And let's look at this other one: these propellers don't look like they're moving, or perhaps it's that effect where the frame rate doesn't catch them, so it doesn't look like they're moving, and they disappear a little there. Alright, let's look at the extended version. We now have a nine-second clip. It's going over what becomes a bit like a bridge, and then we're actually on a bridge, so there was a little morphing happening there. Let's see what this one does. Alright, that one doesn't go onto the bridge; that's quite nice, actually, quite a nice clip depending on what you use it for. Not ultra-realistic, but a good video. You can also do things like styles, which is exactly what we've been learning: anime, cinematic, cyberpunk, all that stuff from before. Now, the next thing I wanted to show you, which will take some working, is the character prompt. You should be able to get consistency amongst your characters, which is something I think Dream Machine is going to get better and better at, but it isn't quite there right now. What I've done here is uploaded an image of myself and typed in "@character", which is exactly what Dream Machine's own guide says to do. I don't really use this, because I use Runway and it's fine, and I'm obviously creating my characters seamlessly in Midjourney anyway. But the idea is that you add @character followed by your prompt. It struggles slightly with keeping character consistency, though, and doesn't quite do what the guide says. I have tried this, naming the character, uploading and everything else: "@character in the desert flying a kite", which it expanded to "Dan, a man with a trim beard, is seen flying a kite, sunlit desert, vibrant visuals". But I got a kite that appears above my head and turns into someone else who looks at it, all still in my studio here, which I find quite hilarious given that image. Trying just "in the desert", I got a little shot of me winking, and one zooming towards me where my hand is heavily morphed as I touch my hair. So it may be that if I removed the background and uploaded a plain image of me, which is one more step, it would do better. I'm not 100% convinced by it yet, but it is getting there. Now, of course,
add your styles like we mentioned, aesthetics, anime, cinematic, visual references. If you
144
use an image as a style guide, type at style followed by your prompt. So if I use that
145
same image, let’s go there. Let’s upload that image again. And let’s go at style, a woman
146
with long brown hair at her desk. So it should be using this image as a style, which looks
147
like this camera, the background, etc. And that lighting to this time create a woman
148
with long brown hair at her desk. Now we see it’s not used the style for this. Well, it
149
has, I guess, but it’s definitely not a woman. It’s just more images of me. I turn into a
150
woman there and I stand up and there’s a lot of morphing happening. So maybe not for this
151
shot. I would use Luma Labs and if you need to use it, if you’re not able to generate
152
with runway for whatever reason, it doesn’t like your prompting, you could use it in
153
here. It’s a little bit more relaxed for a lot of things like drone shots, things over
154
cities and stuff like that, that doesn’t not going to have the effects of morphing or anything
155
like this, which we see with some of these. Now there are a couple of other things I wanted
156
to try here. Camera motion and also looping is interesting. If you’re using social media,
157
sometimes you want your image to loop, loop, loop. So people watch it multiple times or
158
to look like a seamless image. Let’s try both of those things here. Let’s just say looping
159
video. Nothing else. I would give anything else. That’s all it has to go on. And we should
160
have a loopable video that starts and finish at the same point. Pretty much that means.
161
And then the other thing I wanted to try over here, I wanted to do the camera motion like
162
pan orbit or zoom. So let’s do that again. Zoom. All right. And a modern workspace inside
163
of the camera gear, creating a warm professional. It’s a looping video. Let’s have a look at
164
this. Yeah. So it comes back to exactly the same point. There’s always that little jolt
165
as it comes there, but it does loop seamlessly from one to the next. Let’s have a look at
166
this. A lot of morphing happening, but comes back to that position to be able to loop back.
167
Now I’ve asked it to zoom in. Definitely does exactly that. Let’s have a look at the next
168
iteration or pick up another coffee cup and drink. I’m zooming out on this one. OK. And
169
just a note to remember that if I’m inside a board like this, it does remember earlier
170
generations. So when I am doing things like 'turn me into a woman', I might have to do multiple
171
iterations as it remembers previous iterations that we’ve made inside that board. You could
172
instead work straight in your ideas tab as opposed to a board if you wanted to. Luma
173
definitely has a place and it is growing at a rapid rate and it’s going to get better
174
and better. And there’s loads more to this tool. I’m just showing you the video generation
175
for this section, but I can't show you everything. And this is one of the big guns inside the
176
AI video generation space. So let's look next at another tool.
— Haiper Overview: Quick Video Generation —
1
Another extremely popular and quickly growing AI video tool is Haiper. So you can get this
2
if you go back over to our site, AI video school slash AI video tools, and then you
3
come down to the drop down menu here under Haiper, you'll see here you can just click
4
through and go to site or it’s at the bottom here also. And here I’ve got some step by
5
steps for how to use it and then also some details for prompting specifically inside
6
Haiper. Now if you want, they do have a blog that's really good that goes through
7
a lot of the things like prompting and how it works. The tool is massive. There’s a lot
8
to do here lots of different products that they have. I’m going to obviously cut through
9
some of the stuff I don’t think that you need. And for this section specifically, we’re looking
10
at video, aren’t we? So we’re going to concentrate on that. So you can go through and check this
11
out, click through, you'll come to a site for Haiper that looks something like this.
12
Again, this may vary slightly as time goes on. If there are any major updates, I will update this,
13
but everything’s going to be pretty much on a page looking something like this. It’s very
14
clearly laid out, a really nice top-to-bottom page. Here you've got text to image, text to
15
video, image to video, and then here’s your other things like to extend it, enhancer,
16
and everything else. So where we want to go right now is image to video.
17
That’s the main thing we’re doing. We’ve already created our images, you can do text to video,
18
but you’ve seen on all tools when I’ve been testing this, how poor the results are for
19
that. So let’s go image to video and click on that pop up appears and quite simply to
20
go through this, you can just check you’re in the right one. Yeah, I want to be image
21
to video. Sure. Use the newest version; right now that's 2.5. There might be a newer one as
22
you’re watching this. And you could be uploading multiple images. For example, you could be
23
doing your first image, middle, and last, so you go from one to the next to the next.
24
We don’t do that. Usually I like to get a little bit more freedom with this. And also
25
you can change the duration four seconds, six seconds. This will depend on your plan.
26
I’ll quickly show plans here. Free you get to try for free get 100 credits. You can do
27
this. I’ll do monthly $10 monthly $30. And you can cancel this anytime. So you could
28
upgrade from your free plan to paid monthly, give it a test, instantly unsubscribe and test
29
it out. Or if you like it, obviously, keep it and if you need more credits, then upgrade
30
that. So let’s go back to image to video. Best thing to do. There are other ways to
31
do this is I like to do full settings. And I scroll through all of these and I see them
32
but we’ll upload our image first. Let’s keep consistency among testing all these different
33
AI video products. And I’ve got this image of that drone shot across London right there.
34
Now, I need to describe and prompt for this about what I want it to do. So if I go back
35
to our page, I've described exactly how prompting works inside Haiper. So you need to start
36
with clear and simple instructions for the scene, like 'horse running on a beach'. Obviously,
37
if you’ve already got the image, you can add that but you need to add some things like
38
jumping, exploding, aligned to the generated images and motion for better results. So it’s
39
even telling you there that it's better if you have an image, for example, of an exploding
40
horse here, horrible as that would be, if that's what you want. Whereas if you just put
41
up a picture of a horse and said exploding, you might have to do several iterations to
42
get that. Now use references. This is like our style guide, for example, in a style of
43
1980s grindhouse horror, that’s directly from their blog, that style you could be using,
44
of course, you could be saying cyberpunk realism, Western, neorealism, steampunk, surrealism,
45
whatever it is that you’re using. And we’ve gone through all of those earlier in the course.
46
Now add details and experiment with specific descriptions. Avoid metaphors like as white
47
as snow, ignore metaphors. Okay, actually, that was a simile. A metaphor example is more
48
like 'a man drowning at work', when you mean that he has lots of work to do. That would
49
be taken quite literally, and you might see him drowning at work. So avoid that. That's
50
because the model is trying to look at exactly what your keywords are that you put in in
51
here. So test different variations, and you can try that. You can also get cinematic: include
52
camera angles in your prompts like this is a drone shot, this is moving, we zoom out,
53
zoom in, low angle, all different things like that. So let’s go in and play with that.
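The prompting recipe above, a clear scene, literal motion words, a style reference, and a camera direction, can be jotted down as a tiny template. This is just an illustration of the advice, not a Haiper API; every name in it is made up.

```python
# Sketch of the prompt recipe: subject first, then literal motion verbs,
# then an optional style reference and camera direction.
def build_video_prompt(subject, motion=None, style=None, camera=None):
    parts = [subject]
    if motion:
        parts.append(motion)                      # literal verbs: "jumping", "exploding"
    if style:
        parts.append(f"in the style of {style}")  # e.g. "1980s grindhouse horror"
    if camera:
        parts.append(camera)                      # e.g. "drone shot, camera moves forward"
    return ", ".join(parts)

print(build_video_prompt("a horse running on a beach",
                         motion="kicking up sand",
                         style="1980s grindhouse horror",
                         camera="low angle"))
```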
54
So I’ve got this, I’m going to say a drone shot camera is moving forward following the
55
drone. Now you can have enhanced prompts right here. This is basically if I translate this
56
for you, it’s saying that you might put in lots of things and you might put in words
57
that aren’t needed. For example, there may be some bits that it’s ignoring. If you do
58
that, then the model is going to enhance the prompt basically for its own good. It’s not
59
going to suddenly add in a pony in shot or something completely
60
different. It's going to enhance your existing prompt to make it a better fit for the model.
61
So I always pretty much keep that on. I’m going to make sure everything else is set
62
up right here. Yes, that’s nice. I’m using image. So if I was using text, a prompt, I
63
would have options on here like your aspect ratio and things. I can show you what would
64
be here. You’d have speed, duration, mode, and aspect ratio available if you were doing
65
text to video, but we don't do text to video because we're more advanced than that in this
66
course. So we can do that. And then I want to generate. Now, just to let you know, this
67
is in a more relaxed mode because my plan is not really advanced. I’m not in pro plan.
68
I don’t use this that much. I use Hapier, for example, the same with Luma Labs. Really,
69
I have it as a backup to my main AI video tool that I use and subscribe to. So if I
70
have a problem with runway that it’s not allowing an image to generate, for example, if there’s
71
an image that I’m trying to get to video that’s of a bunch of guys and girls that are at the
72
beach and they’re in bikinis and men are in like trunks, board shorts and things, sometimes
73
not always, but sometimes things can get flagged because they think, oh, perhaps this
74
is inappropriate. It’s using AI. It’s using automation to try and work out whether images
75
are inappropriate. That’s an example where it’s not inappropriate, but it may also flag
76
it just to make sure, because they're being safe with something like that.
77
Quite often I use Luma and Haiper because they are more lenient with that kind of thing.
78
I’m not saying you could do anything untoward, derogatory or adult in any way using any of
79
these models that will get flagged and stopped. But I use it for when runway is blocking me
80
when it shouldn’t really, because I can report it that it shouldn’t be flagged, but it’s
81
not going to fix it straight away. It’ll take a while for someone to check it, look at it
82
and then maybe make an update to the model. So that's what I use this for, and Luma that we previously
83
looked at. Also Pika that we’re going to look at shortly. I use those as my backup models
84
and that’s pretty much what I use this for, because you’ll see iterations. If I did this
85
one earlier, then here, let me bring this out for you. It’s a good model walking through
86
a man walking through New York City. This was a text to video, but there is a little
87
bit of blur morphing going on here. Not quite as clean as perhaps runway, but still quite
88
a nice image. So that has generated in here. Let’s have a little play. I’ll bring this
89
up larger for you. And okay, I’ve said camera moves forward and it does ever so slightly,
90
but not very much. The propellers are okay. Not great. To be fair to it. Let’s do one
91
more generation with a different image. Let’s do that image of me again this time, much
92
like you saw with other tools. I don't want to give it any prompts. I just want to see what
93
the AI model does by itself. I’ve already got the sizing right here, so it’s fixed to
94
my image. That’s fine. Let’s go with this and create wash away for that to generate.
95
Let me show you some other things. So say this was four seconds and you wanted
96
to extend it. I think because of the plan I'm on, not a pro plan, the frustrating
97
thing is that there isn't the option inside here. Even if I open this up, there's
98
not an option inside here to be able to extend this in any way. You have to do it kind of
99
manually, which is annoying. I can vary the prompt inside here and change it if I wanted
100
to. I can also just regenerate with the exact same prompt. Or the way I'm going to have to
101
do this, I think, is to download it right here. So while that's generating, you can see
102
the creation modes right here for different things. Or if I went back to home,
103
I could go here and go extend duration, upload my video. I won’t give it any prompts. Actually,
104
I just want to see... actually, no, because then it might just extend it without doing much
105
at all. I’m going to say zoom and I’m going to create. So these are both now generated.
106
I guess first, let’s do the extension. If you remember this one right here, it was this
107
long, and then I wanted to extend it with a zoom. Let’s see how well it’s done. If I
108
make this bigger for you, let’s go. It’s definitely extended the shot. So if I wanted
109
just more of that shot, it extends it that way. But it hasn’t added my zoom in here.
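Since the in-tool extend option isn't on this plan, the manual route is to download the clip, generate the continuation, and join the files yourself. One common way is ffmpeg's concat demuxer, which reads a small list file; the helper below just builds that list (the file names are placeholders, and this workflow is my own suggestion, not part of Haiper).

```python
# Sketch: build the list file for ffmpeg's concat demuxer, then join with:
#   ffmpeg -f concat -safe 0 -i list.txt -c copy extended.mp4
def concat_list(clip_paths):
    """Return the concat-demuxer file body: one `file '<path>'` line per clip."""
    return "".join(f"file '{p}'\n" for p in clip_paths)

print(concat_list(["clip.mp4", "extension.mp4"]), end="")
```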
110
Now this one right here, I was giving the tool another chance to do a good image and
111
it looks okay. Looks like my head moves quite nicely. That looks
112
quite realistic, and it's keeping my face quite well. Sometimes, a little
113
bit like when we looked at Luma last time, some models just completely change your face.
114
This is pretty good. There's a lot of steam; it looks like suddenly my coffee is
115
exploding. That's not so good. But the movement is good, so you just reiterate and reiterate and
116
perhaps regenerate that. So that was Haiper. I mean, there's more to this
117
tool, like I said, but for what we’re doing for this course and creating video, that’s
118
a nice overview of how to use this. Use this site, the page on the site that I’ve made
119
for you to go along and test it. I use it, like I said, as a backup to Runway. So it's
120
always good to have new models and these are continuously upgrading and getting better
121
and better. So by the time you play with this, it may be even better and do exactly what
122
you want it to do regarding the projects that you’re doing. So let’s move on to another model now.
— Pika [New 2025 Updates & Features] —
1
So I’m just putting this lecture in first,
2
it’s actually an update lecture about Pica.
3
So the next lecture is the original one
4
that I had about Pika, but there's been
5
some, see all this down the bottom here,
6
there’s been quite a lot of updates with
7
Pika recently.
8
So as promised, I’m going to give you
9
this update lecture, especially about Pika frames and
10
some other things on here.
11
I cover a lot of this, like Pika
12
additions and Pika effects, in the next one,
13
and I’ll go through stuff like all the
14
different packages you can get, subscriptions and things.
15
But I want to show you Pika frames
16
and how people have been using that, and
17
to let you know that now there is
18
all the way up to 2.2, and
19
there’s all these different versions of Pica.
20
The higher the version, the more recent it
21
is, also the more it will cost you.
22
So depending on what package you have, you
23
get billed per generation.
24
I’ll go into that in the next lecture.
25
But if you’re thinking of using Pica, obviously
26
come over to here, pica.art, and then
27
you’ll be able to see what the current
28
subscription packages are.
29
I want to show you something here called
30
Pika frames.
31
If I hover over here, you’ll see that.
32
You see how that just transitioned from one
33
to the other, to the other, to the
34
next.
35
People are loving using Pika frames because you
36
can pretty much make a seamless morphing transition
37
between one frame to the other.
38
So they’re great for things like music videos,
39
adverts, or loads of other kinds of projects.
40
So to show you that, let me
41
just quickly go onto Pixabay and get some
42
copyright-free images to use.
43
Let’s just click on a couple of these.
44
So let’s use this one right here.
45
Yep, download it.
46
And then let’s morph that into, let’s go
47
for this one, and then perhaps this.
48
And that’ll do for now.
49
So Pica’s really easy to use.
50
You’ll see more in the next lecture as
51
I do that.
52
Let me go into Pika frames right here.
53
And what happens is I can either choose
54
loop or more frames.
55
Don’t worry about these right now.
56
So if you want it to loop, so
57
if you’re making a social media video where
58
it keeps going and going like a short,
59
people use it for that.
60
But let’s just add the frame.
61
Okay, let me add that first shot right
62
here.
63
If I just click it, let’s add that
64
first woman in here.
65
Here I can choose the timing, how long,
66
once I’ve got my next frame in.
67
So now we’ve got our two frames.
68
I can say, yeah, morph that between there
69
and there in five seconds.
70
Okay.
71
Now I can describe my transitions.
72
I can say morph from one to the
73
other, zoom in, blend, whatever it is.
74
But I can also add more frames.
75
Click right here.
76
So I can add even more frames.
77
Add the old woman at the end.
78
I still want it to be five seconds.
79
I could change that and make it shorter
80
if I want to.
81
But that’s fine.
82
It’s going to be like a 10 seconds
83
as five seconds to go from that shot
84
to there, one to that to there.
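The arithmetic here is simple: each pair of consecutive frames becomes one timed transition, so the total length is (number of frames − 1) × seconds per transition. A one-liner makes that explicit (my own helper, not part of Pika):

```python
# Total clip length for a Pika-frames-style chain of keyframes:
# each consecutive pair of frames becomes one timed transition.
def total_duration(num_frames, seconds_per_transition=5.0):
    return max(num_frames - 1, 0) * seconds_per_transition

print(total_duration(3))  # three frames -> two 5-second transitions -> 10.0
```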
85
And then if I want to do a
86
whole new set, well, it's going to
87
finish on this exact image, right?
88
So if I want more of this woman,
89
then I could either add her image again,
90
and that’s a little hack way that’s going
91
to then be on her for five seconds.
92
I don’t need that.
93
Or the next one, I could start with
94
this woman here and do my next set
95
of people that I want to do.
96
So I’m going to show you what this
97
is doing.
98
Again, it’s got no text.
99
You can’t, once you’ve got more than one
100
frame, I can’t add text in here as
101
of right now.
102
You can if you just have one to
103
one, but let’s hit and let’s run with
104
that.
105
Once again, you’ve got to be on 2
106
.2, means you need a paid plan.
107
I’ll talk to you about that in the
108
next lecture.
109
Let me run this.
110
And now I’ve clicked for that to start.
111
I can see it right here happening in
112
the background.
113
Let’s close this.
114
By the way, here’s also your total 10
115
seconds, how long that’s lasting for because it’s
116
finishing on that last frame of that woman.
117
So it’s going to finish quite abrupt.
118
Again, if you wanted another one, just add
119
that woman once again.
120
Nice, that’s finished.
121
Let’s open that up and let’s have a
122
little play.
123
Okay, we’re morphing from that one woman into
124
our next image exactly there.
125
And now it’s going to morph into the
126
old woman.
127
So it didn’t morph so much that last
128
one, it kind of just blended in.
129
You could run it again and you could
130
retry, but that first one to here, that’s
131
a really cool transition, isn’t it?
132
You’d have to be pretty good, some kind
133
of graphic emotions specialist to be able to
134
do that.
135
The watermark is only because I’m on a
136
cheaper plan.
137
You can get rid of that if you
138
had a plan for this, but that’s really
139
cool.
140
So that’s a big, big addition to the
141
Pika suite that people wanted to know about
142
and use.
143
It’s something I’ve used definitely on music videos
144
and things, but there are loads more.
145
If I go to Pika effects, I'll show
146
you this in the next lecture, so I
147
won’t do it here.
148
There’s lots of things like you can make
149
things inflate from your images, make yourself a
150
lot younger, or squash things.
151
There’s more being added all the time.
152
So you can go on here and just
153
play with these.
154
I can’t go through all of them.
155
Turn yourself into a superhero.
156
Yeah, pretty cool.
157
Make yourself into a warrior.
158
Nice, really good.
159
Okay, iPod.
160
Oh, yeah.
161
All right.
162
You probably saw on one of my intro
163
videos, I had stuff like when I melted
164
or blew up and things like that.
165
That’s on here.
166
Talk about it in the next one.
167
Pika scenes is where you can add multiple.
168
You’ve seen this on the Sora one a
169
little bit.
170
I can add multiple images, videos, and I
171
can say, hey, I’d love it to do
172
this X, Y, and Z between one and
173
the other.
174
A bit like Pika frames, but you have
175
control.
176
More for stories.
177
Pika frames, like I just showed you, is
178
more for things that can be nonlinear, just
179
all about an effect, like a social media
180
video or music video or something.
181
Pika additions is: describe what you want and
182
add it to your video.
183
You can have a video of me there,
184
and I say I want lots of birds
185
in the background or something like that.
186
You can upload it here.
187
The same with Pika swaps, where you can
188
just replace even a character in a video
189
with a completely new object or character, which
190
are really cool, but these are not really
191
the tools for making …
192
I don’t know if you’re going to use
193
these if you want to make a short
194
film like we’re doing in this project.
195
They’re probably more Pica.
196
I like to have them as a secondary
197
source because you can just, of course, you’ll
198
see me in the next lecture, just use
199
these just to make a video.
200
I can just upload the image that I
201
want and create a video from it.
202
That’s no problem.
203
I could have this, the older woman in
204
there, and I could say, hey, I want
205
this to zoom in so I want her
206
to smile.
207
I want her to move around, just as
208
we do in Runway or any other tool,
209
but it has some really cool things here
210
for other effects, social media vids and stuff
211
like that.
212
I use Runway predominantly for my standard shots,
213
but if I do want this effect, then
214
Pika is, I think, market leading in merging
215
and morphing these shots together, morphing in a
216
good way here.
217
I want to give you this quick update
218
about Pika.
219
These get added to all the time.
220
I’ll probably update this lecture again when more
221
happens, but the next lecture runs into Pika
222
a little bit more and you’ll see me
223
use this to be able to just animate
224
some ordinary images into videos and I can
225
assess the morphing or how good it is
226
and things like that.
227
If you want to make this kind of
228
stuff, Pika is definitely the platform for you.
— Pika AI Video: A Comprehensive Overview —
1
Another extremely popular tool and up there in
2
some of the best generations I think is
3
Pika, PikaLabs or Pika.art. So if you
4
go back over to the site AIvideo.school
5
AI-video-tools once again if you scroll
6
down you’ll see Pika, use the drop-down
7
and I’ve got a link right here you
8
can go to it or at the bottom
9
right there and there’s also some details which
10
we follow along with prompt crafting and also
11
how to use it if you want a
12
step-by-step guide in text but I’m
13
going to show you right here.
14
So you come to a page like this.
15
It's the explore page, in which you can
16
explore, obviously, other generations that have been happening
17
on the site. Or you can go into your
18
own library right there and see stuff that
19
you have been generating. But either way, wherever
20
you are, at the very bottom of the
21
page right here is the prompt bar.
22
So this is a very simple site to
23
use. I'm going to go through how to
24
do that now. You can upload your own
25
image right here, or I could be just
26
doing text to video. We want to use
27
this, of course.
28
Pika effects are really fun. If I click
29
this and show you, you can do things
30
like put on your own image and say
31
I want the eyes to pop out, I
32
want it to explode, I want to decapitate
33
the head, I want it to melt. So
34
you can get some really cool effects with
35
stuff that's inbuilt inside
36
Pika to ensure you get that for your
37
iteration.
38
Here, right here, are the different versions of
39
this. You may need to use 1.0
40
for lip-syncing. I have a video later,
41
a whole video, looking at different models, and
42
Pika is one of them, for lip-syncing.
43
If you need to use that, you may
44
need to change your model for that. And
45
then under advanced prompts at the end, this is
46
where you put your negative prompt, if I
47
don't want something in shot for example. Or
48
I can also change my aspect ratio here
49
to make sure it fits a YouTube video
50
or a normal video film, perhaps Facebook one-to-
51
one squared, reels and stuff for TikTok or
52
Instagram, whatever it is that you want. The
53
different aspect ratios are here. So let's play
54
with some of these. Oh, also, you just
55
heard that if you hover over any of
56
these, it instantly starts playing, including sound, which can
57
be a little bit annoying. But we will
58
click image. I'm gonna get that image of
59
me again let’s just make sure these are
60
side-by-side comparing these tools one at
61
a time as we go through let’s do
62
a few things here I’m going to describe
63
this so there’s me right there let’s do
64
zoom in to man’s face I’m even gonna
65
give you something extra which is a little
66
bit more man waves now if I go
67
back on to the prompting right here, you
68
can see that basically you need to be
69
very simple and straightforward with your prompt. These
70
are more geared towards text to video as
71
opposed to using your own image. When you
72
have your own image, it's much simpler to
73
do, but you can add camera moves like pan,
74
zoom, rotate, or combined movements. Also there are some
75
other things I never use here, but you
76
could have slash create to create a
77
prompt, and optional image references if you wanted
78
to. I can also add layers of customization
79
with these. I don't use these; they're there
80
for you if you need to, but I
81
don't think you need them necessarily with Pika,
82
and it's pretty much more of the usual.
83
It's less conversational than other tools we've reviewed, like
84
Luma and things. But perhaps, where there
85
aren't the on-screen controls like we saw
86
in Runway for changing the camera angle, zooming
87
in and stuff, it's definitely more conversational in
88
that regard. So let's just click right
89
here and we can generate that, and it's
90
happening right here. I'm also going to, whilst
91
that's there, take away this prompt.
92
Let's play with some of these Pika effects,
93
because these are really fun to do. Let's
94
get 'decapitate my head' on there. Let's run
95
that one. Let's also have me explode. Okay,
96
let's run that one. First iteration done. Let's
97
check this. I'll make this bigger for
98
you. I asked it to zoom in, didn't
99
I? Really nice, except it's showing my tooth
100
suddenly here. Actually, I think I remember it
101
doing that before. I could just quite simply
102
retry, or I could reprompt. So if I
103
just click reprompt, then it comes down to
104
the bottom right here and I can reprompt
105
to try with that. I did say wave;
106
the man definitely doesn't wave. I could just
107
retry that if I wanted to. As we
108
know from using Runway or any model, you
109
need to do more iterations of this. So
110
I’ve now got some fun ones right here
111
we had the explode didn’t we and the
112
other well let’s let’s play this this one
113
is decapitate that’s so good that’s so funny
114
my neck looks a little bit fake here
115
that I’m just picking hairs here that’s funny
116
I’m so glad that happened okay let’s play
117
this one too I explode you see the
118
noise that comes with this Wow okay let’s
119
have a look at the explosion one whoa
120
nice. Really fun, really, really fun. So Pika
121
is a lot of fun to do. I
122
could actually probably spend all day long going
123
through all these different Pika styles that they've
124
got in here. I really love these Pika
125
effects, but it has everything that you need.
126
It has everything, although there's slightly less
127
control perhaps than Runway. I use this, like
128
I said, either if you need some of
129
these stunning effects, because you want to add
130
these, you know; it's a lot more
131
likely to go viral if you're making things explode
132
or melt and stuff like that, because they're
133
visually very entertaining and great to look at.
134
Or I use it as a backup tool to Runway,
135
if I'm unable, like I mentioned before,
136
unable to generate something in Runway for whatever
137
reason. Then I have backup tools, and definitely
138
Pika, alongside Haiper and Luma, they are definitely
139
some of my backup tools. But some of
140
them are my main tools for some effects.
141
Let's move on and check out another model
142
now.
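Before moving on, the platform-to-aspect-ratio pairings mentioned in this section can be summarised in a small lookup. These are common conventions, not an official Pika list, and the names are my own:

```python
# Common aspect-ratio conventions per platform (a reference sketch,
# not pulled from Pika's UI; names are made up for illustration).
ASPECT_RATIOS = {
    "youtube": "16:9",         # standard widescreen
    "facebook": "1:1",         # square feed posts
    "instagram_feed": "1:1",
    "instagram_reel": "9:16",  # vertical
    "tiktok": "9:16",
}

def ratio_for(platform):
    return ASPECT_RATIOS.get(platform.lower(), "16:9")  # default to widescreen
```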
— PikaAddition (New Feature – Updated Lesson) —
1
So, whilst we’re here, I’m just going to
2
add in this lecture now.
3
This is an update lecture I’m making after
4
the course, because obviously, like I’ve said, I
5
keep updating this all the time.
6
So there’s been a change on Pika.
7
There’s been a feature added that I want
8
to show you right now, and it’s called
9
Pika Additions.
10
Now, I’ve been reading all about this and
11
seeing what they say.
12
I’m on the updates for most AI platforms,
13
and actually, the more I use Pika, the
14
more I like it.
15
I really love its flexibility and what it
16
allows you to do.
17
So I was excited to see about this
18
feature, I’d like to test it out.
19
So let’s test it out together, shall we?
20
So I’m going to click here, and we
21
can run through this together as if it’s
22
the first time doing it.
23
It says, let’s get started with Pika Editions.
24
So the first thing you do is you
25
upload a video.
26
Could be something you shot yourself.
27
It could be an AI video clip, of
28
course.
29
But we need to make sure, here’s some
30
examples, we need to make sure it’s at
31
least five seconds in length.
32
That’s fine.
33
So we’re going to have that.
34
Then we add an image.
35
So it could be, here’s an example, a
36
balloon, an octopus, a little rodent of some
37
kind.
38
And then basically, you give it a prompt.
39
Add this to my video.
40
Based on the current actions in the original
41
video, come up with a natural, engaging way
42
to fit this object into the video.
43
So we’ll try that, and then we’ll also
44
try some specific prompting for location for it.
45
So let’s do this, shall we?
46
So let’s upload.
47
Let me grab a video right here.
48
Okay, so I’ve got a little video of
49
this woman.
50
Looks like she’s at a festival, just kind
51
of smiling at camera, five seconds long.
52
And let’s add an image.
53
And to add that, I’ve got a picture
54
of a canary, a small little yellow bird.
55
So I’m going to give it this, actually.
56
Based on the current actions, let’s just fit
57
it to where you see it.
58
It would be good if it’s on her
59
shoulder.
60
Maybe they’ll make it fly by or something.
61
So let’s say, I’m actually going to play
62
you quickly.
63
So here’s the example video, just very simple,
64
just like this.
65
And here’s the image we’re using, okay?
66
This of a canary.
67
And we’re going to join these together.
68
Let’s see what Pika does.
69
Okay, that’s finished.
70
Let’s take a little look at this.
71
Okay, well, it has put it on her
72
shoulder for definite.
73
It’s exactly what I wanted it to do.
74
Let me get that full screen.
75
So it doesn’t look the most natural, like
76
the bird goes through her head at the
77
end, but it’s definitely done.
78
It’s the equivalent, you know, when we’re in
79
like Midjourney, or any other image creation software,
80
and we are like using our editor, making
81
space and saying, put this here.
82
Or in Photoshop, you would have seen me
83
do it a lot.
84
Put this in this image.
85
It’s like doing that, but with video.
86
What it doesn’t do is this bird is
87
way too big, but that’s okay.
88
You could probably keep prompting that.
89
Let’s do this actually.
90
Let’s say, add this to my video.
91
Let’s say the small bird is sat on
92
her shoulder.
93
I said it’s a small bird on a
94
shoulder.
95
Let’s see.
96
I want to try a couple of more
97
of these actually while we’re here.
98
These are fun.
99
All right, and that’s finished the second one
100
just quickly.
101
It hasn’t made it smaller.
102
Definitely.
103
It’s huge.
104
Maybe it depends on the size of the
105
image you are adding, because that’s like a
106
full size and that’s smaller.
107
Let me test that one more time doing
108
that.
109
So I could just be back in the
110
editing software and say, hey, make this smaller,
111
or I could just bring it into Photoshop
112
and let’s just actually make that smaller like
113
that.
114
Okay.
115
And let me file and save.
116
Okay.
117
So I’m going to save.
118
Let’s change that one.
119
This one where it’s smaller right there.
120
Okay.
121
Let’s do that.
122
And let’s do exactly the same thing again.
123
So that has finished.
124
Let’s take a little look.
125
So it is a little bit small.
126
If I compare this one, look at the
127
size here compared to the size there.
128
Let me have a little look.
129
Not a lot smaller though, but a little
130
bit, but maybe that is pretty much the
131
size.
132
Imagine this is 69.
133
My bird was pretty much this size before
134
it was full screen almost like here.
135
So maybe it does matter, or it does
136
take into account the reference size relative to the entire image
137
it's taken from.
138
There’s something to explore and to play with.
139
Now whilst those were generating, I could have
140
done this inside Pika, of course, for video,
141
but I just created an image of a pig
142
and a farm in the USA.
143
I’ve taken that farm and I’ve run it
144
through runway to get a moving image and
145
downloaded both of these, this pig and this
146
video of a farm.
147
So let’s add those and let’s try and
148
make a pig fly.
149
There’s the farm and there’s the pig.
150
Okay.
151
Add this to my video.
152
A pig flies.
153
All right.
154
Let’s take a little look at this.
155
Okay.
156
Yeah.
157
I mean, wow.
158
That’s actually really good.
159
Maybe they’re beginning a little bit screw here
160
when these arms are down there.
161
This bit, the pig, a little bit cartoony,
162
but so was the image that I gave
163
it.
164
You can just keep regenerating and retry this.
165
Wow.
166
This is a really cool, nice feature.
167
I like this.
168
I’m going to play with this a lot
169
more.
170
It’s great for you if you want something,
171
for example, you know, in runway before we
172
could have had our first image could be
173
this one and then I could generate another
174
image with a pig in it.
175
And I’ve got to say, hey, the pig
176
flies from left to right.
177
Well, now I could be using Pika and
178
I can just give it the image of
179
the pig as well as the video of
180
this and say, hey, this flies great.
181
And it’s really good.
182
So it’s a really nice update feature.
183
I like this.
184
So now you’ve got image, you’ve got Pika
185
effects, Pika scenes like we saw that were
186
fun.
187
And now Pika edition, Pika is growing and
188
growing.
189
It’s going to be, it’s in, it’s in
190
my top two, this and runway are now
191
my two top go to go to tools
192
for a video for sure.
193
And they’re getting better and better.
194
I think where Pika used to lack, they
195
used to have a little bit more morphing
196
and things when runway did is catching up
197
with other effects that it does, other tools
198
with inside is a video and the morphing
199
is getting less.
200
So they’re actually catching up with each other.
201
And they’re amazing.
202
Anyway, I hope you enjoyed this update.
203
Let’s continue and go on to another AI
204
video tool.
— InVideo: Overview —
So, invideo. This is one heck of a tool, perhaps not exactly what we want for this course, but I’m going to explain it to you anyway. If you go back over to aivideo.school/ai-video-tools, you can use the dropdown for invideo. I’m not going to spoil it for you there and show you any of that, because this tool is something special, but a little bit different from the other tools we’ve been showing. It may fit your purpose perfectly, or it might not. It really won’t help with following along with what we’ve been making so far, but it may be what you need. Okay, I’ll show you the tool right here: you can go to ai.invideo.io.
You can upgrade if you need to, but there is a lot you can do for free: 10 minutes of video a week, generated right here. It’s really simple to use. I put in as much or as little detail as I want as instructions to create a video. This is text-to-video in a sense, and that’s its main purpose, but invideo is not trying to be like the other AI models that generate video from an image to be used piece by piece. It’s trying to create you an entire finished video using AI only, which is its USP. It’s really great.
Now, you’ve got the different tools right here: workflows, if you like. If you click Workflows, you’ll see options that are wonderfully automatic. I can ask for a short AI video; a 15-second clip, ideal for an advert; a 30-second clip; turn my script into an AI video; turn my screenplay into an AI video; or an AI montage. Perhaps I want an animated advert, for example, which is obviously brilliant. So let’s take the 15-second clip, and you would just fill out this form; it’s made it so easy for you. I want a 15-second clip for YouTube, about “explain how the USA government works.” Now, that may be a little bit of a stretch for 15 seconds. We’ll see what it does.
Okay. Use only generated media, or you could use stock media: sometimes it grabs clips from different stock sites and puts them in there, which may be fine if it’s just for internal use, for example. But let’s say only generated media. I do want background music: fast and energetic, sure, or I could change that myself to slow, eerie, or whatever I want. The language is English; make sure it’s set to that. Then my voice actor: let’s say a female voice, a young American voice. That might be fun. Then captions: yes, let’s add bold subtitles with a popping effect. What’s my style here? Disney/Pixar, or do I want it to be like Minecraft, Lego, hand drawn, film noir, monochrome? Let’s put this in Technicolor; that’s quite fun. Music preference: use the best audio available, use the YouTube Audio Library only (so it’s copyright-free on YouTube), or use Storyblocks audio. Let’s just use that one. And watermark text: you can have a watermark on it, so if I wanted “AI Video School” on this, I could do that. So that’s what I’m doing. It’s as simple and as automated as that. You obviously have a lot less control.
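All of those choices boil down to a short creative brief. Just as a mental model (this is not invideo’s actual API or any real export format; the field names here are mine), the 15-second clip configured above is roughly:

```python
# Hypothetical representation of the invideo brief described above.
# invideo exposes these as form fields, not a config file; the
# structure and key names below are purely illustrative.
brief = {
    "platform": "youtube",
    "duration_seconds": 15,
    "topic": "explain how the USA government works",
    "media": "generated_only",  # vs. "stock" or a mix of the two
    "music": {"mood": "fast and energetic", "source": "storyblocks"},
    "language": "en",
    "voice": {"gender": "female", "accent": "american", "age": "young"},
    "captions": {"enabled": True, "style": "bold, popping effect"},
    "visual_style": "technicolor",
    "watermark": "AI Video School",
}
```

Thinking of the form this way makes it obvious what you trade for the automation: every creative decision is one coarse field, rather than the shot-by-shot control you get in Runway or Pika.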
Let’s continue: generate video. If I were instead using, for example, script-to-video, I could upload my script, like the scripts we made earlier in the course, paste it in here, give it all the same kinds of details, and it can instantly generate me a whole AI video. If you don’t want that much control, this is the easy, way more simple, quick version. Someone once called it the lazy person’s AI video, and I don’t think that’s meant in a derogatory way; it’s definitely the simpler way to go about this. It’s truly remarkable what it can do, and for most people who are just casual users of AI video, this tool is perfect. That’s not what this course is about, but I had to show you invideo because it’s also great.
Sometimes you can generate something like an ad, for example, and it can give you big ideas, stuff you never thought of, and then you can go away and make your own version based on them. That’s just one workflow possibility. So let’s use some stock media here, and it should mix the two. Generating only your own media has a cost: on the free plan it won’t let you do it; you need to pay the monthly fee. Stock media doesn’t cost extra, but you would need the rights to it. So, for the sake of this example, let’s see exactly what this generates.
And it has generated for me already. Obviously, as I just showed you, this is going to have stock footage on it, so it’s not AI-generated footage; if you were to use generated media instead, it would all be free to use. Let me zoom in and play it for you: “Curious about the USA government? Here’s a super quick guide. We’ve got three branches. Legislative makes the laws, executive enforces them, and judicial interprets them. Simple, right? Stay curious.” So that was, obviously, not very impressive: two shots in the whole thing. But I was only doing 15 seconds, and only using stock footage. That wasn’t great, so let’s do another example.
This time, let’s do script to AI video: a two-minute video for YouTube using an exact script. If I’m using something like ChatGPT, I can grab a script from there: “Generate a script for a two-minute YouTube video about how the USA government works.” Let’s keep it as simple as that. Okay: opening scene, visual of an American flag, narrator; the judicial branch; closing scene, end screen. Let’s just copy all of this and paste it in here. Add relevant information about the video: I don’t need to. Again, let’s use stock media; generated media is on the paid plan, and you can imagine what that would be like, so you can play with that yourselves. I’m going to add background music; instead of fast and energetic, something like scary music. My language is English. Captions: yes, let’s put them on, word by word; bold might be too much, so karaoke-style subtitles. Voice actor: I want a male voice with a standard Midwestern accent for the narrator, and I’m not going to give it anything else. Let’s just generate that. Okay, generate video. Then my audience: students? Actually, general public. Look and feel: clean, minimalist, and dramatic. Platform: YouTube. Continue.
All right, it has now made this. You can see the different sections based on the script we uploaded; again, stock footage. But let’s listen to how the voiceover came out: “Welcome to our journey through the U.S. government. In the next few minutes, we’ll explore the different branches that make up this intricate system. So let’s dive in and understand how it all works. Ever wonder how the U.S. government works? Let’s break it down in just two minutes. The U.S. government is divided into three branches: legislative, executive, and judicial.” Nice, that looks really good. You can see it’s using Storyblocks, a stock site; if you bought a license for that, which I think is currently around $60 a month, you could use all these clips, no problem, or you could have it generate its own. But these are obviously real. The benefit of using stock is that it’s real video, shot by someone and then sold on a stock site, so it’s always going to be visually clean: no warping and so on. When you generate your own with an AI model, you may get some of the flaws we’ve seen. But it’s doing everything I asked: a male voice in the accent we wanted, the right feeling, and karaoke-style captions exactly as I specified. And it sounds great: “...government actions are constitutional. Lower courts...”
You’ve seen whole YouTube channels that just use something like this: hey, here’s the topic, explained. You could have an entire explainer channel on any topic you wanted and punch these out with invideo over and over, super quickly. It’s really an amazing tool in that regard. It’s not what we’re doing in this course, which is more of a video school, a film school for making AI video. But if your purpose were to make a channel like that, you could cut out a lot of time and effort and use invideo, depending on what your purpose is, of course.
— Stable Diffusion: Overview of this Powerhouse AI Video Tool —
Now, Stable Diffusion, or Stable Video, is another very popular tool. You’ll remember it from when we were generating images, but it’s also becoming very popular for video. If you go over to our page again, aivideo.school/ai-video-tools, you can use the dropdown right here for Stable Diffusion Video and follow the link over to stablevideo.com. There are free options right here, and there are also paid ones: if I go into my account, I can see I can purchase credits, currently 500 credits for $10 or 3,000 for $50. I’ll show you how many credits it takes to generate a video.
So we have two options: start with image, or start with text. The latter is obviously your text-to-video. I could say “generate a video of a man walking through New York City,” but that’s not a very good prompt; we need more detail. Aspect ratio: 16:9. Style: again, it’s a little like Runway; it doesn’t show you a preview of the style, it just names it. That’s okay; let’s go with photographic, which is like realism. So: “a video of a man, aged 40, wearing a suit, walking through a busy New York City street, bright, sunny daylight.” This is 11 credits to generate from text, and you get 40 credits for free as the trial. Go ahead and trial this; that’s how much it costs to generate that.
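Those numbers make the per-clip cost easy to work out. A quick back-of-envelope calculation, using the prices quoted above (500 credits for $10, 3,000 for $50) and 11 credits per text-to-video generation; prices will of course drift over time:

```python
# Rough cost per clip on Stable Video's credit packs.
# 11 credits per text-to-video generation, per the pricing shown above.
credits_per_clip = 11
for pack_credits, pack_usd in [(500, 10), (3000, 50)]:
    clips = pack_credits // credits_per_clip
    print(f"${pack_usd} pack: {clips} clips, ${pack_usd / clips:.3f} per clip")
```

So the $10 pack works out to roughly 45 clips at about 22 cents each, and the $50 pack to about 272 clips at around 18 cents each; the 40 free trial credits cover three or four generations.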
These are nice images, aren’t they? This one is nice too, very realistic. All right, let me download that; actually, I’m going to upload that image and go from image instead. Here’s camera motion: this is where I can describe, in a conversational style, the camera moves that add to my scene. Do I want it locked? That’s nice and steady. Or shake, for a more documentary, realist style? Do I want it to tilt up or tilt down? I don’t really want either, if I’m honest with you. Orbit or pan? That’s quite cool; orbit, let’s go around him a little bit, shall we? And then I can dolly in, dolly out, up, down, depending on the image and what’s available. That’s all that’s available right now, with more coming soon, and I can also type in my own camera motions and add more things. Under Advanced, I can set the seed (a random generation point), the steps, and the motion strength, which encourages the AI model toward more or less movement. Let’s keep it right in the middle. Let’s generate this for 10 credits.
And here is the video. We’ve got some quite bad warping on the face as he moves; not a great iteration, if I’m being honest, though it was very quick. I could edit this now, view the prompt if I wanted to, and do it again using the same prompt. But just to be fair to the model, let’s do another image, shall we? One we used in one of the other videos: here is that drone shot we used before, to see if it flies. This time I’m not telling it to do anything; no orbit (which the last one didn’t do anyway), no pan down. Let’s just see what the model does with it and how realistic it is.
Generate for 10 credits. And it’s finished generating; let’s take a look at this shot. Actually, really nice. That was a little bit better, I think, than the Luma version we made of this. It just moves across as the drone hovers. It doesn’t look like the propellers are moving, unfortunately, though sometimes they do look frozen like that, depending on your camera’s shutter speed and settings. The motion is definitely nice, with a little bit of movement here. It is good, but it doesn’t have many controls as yet, and these tools are updating all the time. It’s nice that I could generate my image and make that image into a video all in one platform. I think I would definitely use this as a backup: we’ve got Runway, and then Luma, Haiper, Pika and Stable would be my backup AI video tools right now. Perhaps Pika or Haiper are my main two backups, closely followed by Stable. But it’s updating all the time, so as it gets more advanced, I will absolutely keep updating you on it.
— Kaiber: AI Video Overview —
Next tool on the list of AI video tools is Kaiber. Once again, come over to the site aivideo.school/ai-video-tools, scroll down to Kaiber, and under there you can access it as well as a step-by-step guide. Now, Kaiber is a little bit different: it positions itself as an all-in-one platform for all the tools, and it’s also very visually well laid out. Creatives will notice it has a canvas layout, which is great. So I’m going to show you exactly how you do this, step by step. I can either go to Products and see everything about it right here, or we just go to Create. The way it works, if I zoom out slightly to give you more scope, is that you get a canvas like this, with a hand tool to drag and pull yourself around it. If I go to the canvas, a plus symbol up here opens the flow menu: Core Flows, or My Flows if you’ve saved something. Under Core Flows you can see Luma video, Runway video, Kling Standard video, Kling Pro video, Minimax video, Mochi video, Flux, and then some image tools and a video lab as well; you’ll notice all these names we’ve been covering. So let me make a project right here on my canvas using one of these tools; I could click Runway, for example. What it’s using is the actual Runway model that we’ve been playing with. I can add my subject; think of this panel, left to right, as the settings we would normally be setting up. So I can say prompt, “man waving,” then select my beginning keyframe (the image of me again, to be fair); I won’t put an end keyframe; duration five or ten seconds; aspect ratio 16:9, although my image is 1:1 and it only gives me two options right here, which is frustrating. Then I just click this little smiley fellow up here, click OK, and it’s generating right there. This looks awkward at first and you can’t scroll across; instead you grab and drag your canvas. Do I want this here? Do I want it down here? Do I want that there, on the edge? I can make the generating tile bigger by dragging its edge, so I can have the layout exactly how I want.
27
image bigger if I hover to the edge like that. So I can have this exactly how I want to layout.
28
Maybe if I’m doing this project, I would have this and then I would have a man put his head
29
down. Let’s generate this. And this will be here. Perhaps I want like to have that one
30
there or perhaps I want to start putting things together and see how they go. You can lay
31
it out however you want, which is really nice. Here’s a preview of your canvas layout,
32
by the way, make it smaller. Should you be getting yourself more? Move that out of the
33
way. Should you be getting yourself more things? You know, anyway, let’s have a look right
34
now. Oh, that’s really nice. But this is a runway image, of course. So it is going to
35
be really nice. We know how nice runway is. And this one puts his head down, really down.
36
So I could start building up a story, for example, scene 1, scene 2, scene 3 inside
37
my own canvas right here, which I do really like as a creative. It’s a format that I really
38
like to play with. It’s really good. I can then also duplicate this if I want to for
39
my canvas. If I’m looking at my storyboard or if I want to recreate the same flow, we
40
can regenerate it and you can play with it. But we are missing some of the other tools.
41
Of course, we’re missing, for example, extending like you can get inside mid journey. No camera
42
controls. It would all be in prompt. So it is missing a few things. Let me go to another
43
canvas. I could start another canvas here. Let me show you this. I was playing with earlier.
44
This is the other one that I want to compare to just to make sure we’ve got a fair comparison
45
over everything. So this is the shot again. I was using runway model as opposed to one
46
of the others. This is how you can fly across London. Really nice. Really good. So I can
47
create a new canvas if I want to. And you could start using some of the other tools
48
later on. I’m going to update and add some other tools like minimax. That’s just getting
49
good. So I’ll probably add that. But I could add a prompt here. A man waving and we could
50
side by side compare how good this is. I’ve got a lot less controls here. And this charges
51
me at 50 credits. OK. And that’s pretty much with minimax video, the only control inside
52
of Kaiba that I have. All right. Let’s have a look. I don’t get to choose the aspect ratio
53
or anything or get to change that. OK. It is a great way if you wanted to. Of course,
54
I’ve got here all the different models that we could use. So you could get yourself a
55
small one month Kaiba. You could just go side by side testing the different video models.
56
And if you like one, you could go directly to them. They normally have more controls
57
inside their own model. But Kaiba allows you to have all of the different models, you
58
know, from my building things that when something doesn’t go right in the rare case that something
59
doesn’t generate inside runway and I’m struggling with it, then I jump to another tool and have
60
a backup where you could have your backups all in your own main tool here, which is a
61
really nice thing. And that’s finished generating. So let’s have a look. I definitely wave quite
62
enthusiastically. I like it. Let’s have a look at that a little bit bigger. Let me
63
move this over for you. Let’s check this out. I do a little bit of a mouth. I didn’t tell
64
it to talk, but it is talking, giving me I always say that in our models, you kind of
65
get very prominent teeth in here. They’re OK. Those ones are not too bad, but I’m definitely
66
waving tiny bit of morphing in hand, but not too much. That just looks like movement. But
67
I hate it when they do this and you automatically get talking like now I need to put in a
68
dub on him or something. So KBAR is that all in one tool. If you want to test them
69
out, it’s a very interesting concept. And you’re going to get this more and more. We
70
saw it in images to you get platforms that is drawing and white label and almost other
71
platforms all in one. It allows you as a user to test out modal platforms, have backups
72
inside one. But you often, as a result, get slightly less features available to you if
73
you compared to if you were going directly to that tool. So it’s OK for you if you want
74
to test them out. Perhaps you’re new to this. You want to compare them side by side, although
75
most of them come with a free trial anyway to do that. But the best bit about it is this
76
canvas layout, at least for me being quite a visual person. I really do love that for projects.
— Flux Overview: AI Videos —
Another tool that’s gaining popularity is flux.
I want to show you this once again.
Come over to I videocall I hyphen video hyphen tools on the drop down down here you’ll see flux.
Here’s a direct link to it as well as all the other parts about getting started.
How to use it step by step based on what’s on screen in front of you, so you can follow along side
by side.
If I go to flux, I right here at the top, let me zoom out to make you able to see that easier.
Uh, at the top here you can go to flux AI Video Generator.
They’ve got all their tools under here, but we’re going to talk about video generation.
And the layout is really intuitive and really nice and simple.
You can’t get lost.
It’s a great tool for that.
So I’m going to have my start frame.
You can choose optional end frame or not if you this is quite nice.
If you don’t have a start frame for example, you want to start from an image and you haven’t used one
already inside, say Midjourney like we created.
You can do this and you can go.
A man aged 35 sat at a desk with his camera.
I can say the aspect ratio I want this to be is 16 nine.
Uh, optimize the prompts.
Quite nice.
They always do this.
It’s quite descriptive.
Flux I find.
Really quite descriptive.
Look at this.
A 35 year old man sits on a sleek, modern desk, his camera poised in front of him, capturing the
warm glow soft light ambient lighting.
As he leans forward, the camera subtly zooms in.
Wow.
People’s laptops are arranged so you can tell that this model and I say it around here, the about the
post that flux really likes a lot of description, but luckily they have an optimizer inside here using
AI to optimize.
So you can do this and I can generate my image.
And then, from the image, we can generate a video. We already have ours, of course, but I want to show you this model; it’s a nice touch that they put image and video on the same display. So here’s my image right there, and I can populate it right here; we might as well use it for this, I guess. Enter the video prompt: let’s just say “camera zooms slowly” and see what it does. If I optimize it again: “the camera gently zooms in, revealing intricate details of the landscape, boats, golden sunlight, lush greenery in the breeze.” Well, none of that relates to my image, so it looks like the optimizer works only on the text, not on any knowledge of the image, which isn’t great. I’m just going to leave my prompt as it is; okay, let’s upload that.
And here we are. To show you the test: “a man enthusiastically waves at the camera and smiles”; this prompt, as you can tell, was optimized. And there it is, the image of the one we’ve been testing on the other platforms. Once again, here are the really prominent teeth they always give people; it makes me look like I have quite a bad set of teeth, and my teeth move even when at rest in that one, so that’s not great. Hands: quite nice; the movement of the hands is relatively natural, slightly warping, but not much. I would probably regenerate this again and get a different smile and look on that image. But let’s have a look at our main canvas. That’s generating, with an estimated time of 4 to 7 minutes; not the fastest of generation tools in comparison to, say, Runway. While I wait for that... it has finished generating and now says processing. We’ve waited over five minutes; this is quite a slow generation tool.
While we wait, I can show you the credit system and what the packages will cost you; of course, these might change over time. For a one-time purchase, in case you just want to check it out, it’s 4,000 credits for $9.99 or 10,000 for $20, and you get 40 credits for free when you sign up, with daily check-in bonuses on top. It’s slightly cheaper, obviously, if you go monthly. And as for the price per video: we’re looking at 100 credits or so per generation.
Oh, it’s now finished.
The guy fully zooms in.
Let’s have a look at this.
Oh there’s no zoom at all.
A lot of morphing on the face.
You know, this reminds me of a Luma star or old Luma, the previous generation of Luma.
Not that realistic to the point for something like this.
Perhaps in a landscape image.
Drone shot, but for the purpose of video, I’m not sure at this current stage I would use it.
It might be good for what you need.
Pros are obviously an extremely simple interface to use here, and perhaps professional is going to
be slightly better for this, but the morphing is a little bit too much for me.
Have to show you this because it’s one of the fastest growing flux.
And of course, as always, they’re updating nonstop.
So as it updates, I will update you.
— Rendernet Overview: AI Video Simplified —
And the next tool: RenderNet, again in an early form, I think, but quickly growing in the space. Come to the page, aivideo.school/ai-video-tools, scroll down and you’ll see RenderNet right here; just click and we come straight to it. Now, it’s slightly different from the other platforms I’ve been showing you, but I think it’s going to grow fairly quickly, so let’s keep an eye on this. Let’s go “create for free”; you can trial this. It’s made primarily for character-based generations. As they explain it: specialized in creating captivating AI-generated characters for videos, making it a perfect choice for character-focused content like talking-head videos or creative storytelling. If you have a presenter scene in your video (remember the old Alfred Hitchcock shows, Alfred Hitchcock Presents and The Alfred Hitchcock Hour, where he would stand there and present?), a presenter talking between your story beats, or someone explaining something, whatever kind of videos you’re making, RenderNet is a good one for that. You can either use one of their pre-existing models (I’ll zoom out slightly for you), any of these people, or, like I’ve done, upload yourself, so I can go @Dan. What are they doing? I’m just going to say “sat at a desk explaining how to create AI videos.” And then it gives me suggestions: posing on the dock, modeling on a Paris runway. I’ve done that one; it’s very funny. I choose portrait or landscape for this (I want landscape), and generate an image or generate a video. In fact, when I click this, I think you’ll see some of the funny generations I was playing with previously. If I go generate and scroll down, you’re going to see me in a dress somewhere; yeah, here we are. I uploaded Dan and then completely didn’t read the prompt, just went with whatever they had in there, and it was a woman in a red dress in front of a haunted house or something. So here’s me. Gosh, I look like my sister. And I can do all kinds of things, like upscale.
She hasn’t got stubble, though. So then you can also use your character and make a video, which is what we’re doing right now; I’ll show you in a moment. Here are some other ones I made before with prompts they gave me, like one underneath a starry night: it used my face but gave me long hair. I did always wonder what I would look like with long hair; I’m not too sure I should grow it. The sky moves kind of unrealistically (look at it move behind there), and my face changes when seen sideways on, so that wasn’t that realistic. Let me show you this, because after you’ve made your video you can click Narrator, upload your video, and write a script, which is exactly what I did. I think I said, “welcome to the course”; yeah, “welcome to the class.” Let’s have a look: not the worst lip syncing that I’ve seen, pretty good, though obviously it’s only as long as the script I gave it, which was one second. So it does have that built in, because it’s made for talking heads and narration, and we can also upscale our images and all kinds of things.

But we want to have a look at the video to compare it to the other video models we’ve been covering, so let’s play with that. Oh, generating now. Here I am; let’s have a look at me. Yeah, a much different hairstyle, but it definitely looks a bit like me there; I’ve got a slightly different jawline, but that’s okay. Let’s play this. What is that I’ve got here? Sometimes that happens when someone with stubble is uploaded onto someone without stubble; I see it when I face swap all the time. You might have seen it in the last section, where we did images.

From here, I can go to Narrator and upload that. Strangely, there’s no way to send it straight from here to there, so I download it and then go to Narrator, upload video, and upload from my downloads; here it is. We’ve also got the option to face swap, if I wanted to swap me out: if you have an existing video and you want to swap someone’s face over, this is the place to do that; a pretty good tool for it. And then I can either upload an audio file of something I’ve said before, or use one of the voices already in here. So I type “Welcome to the course, the home of AI video!” and use, say, Adam: “Our distrust is very expensive.” “One today is worth two tomorrows.” Yeah, sure, let’s go with that and generate. Oh, that’s generated; that’s pretty fast generation, really fast, actually. All right: “Welcome to the course, the home of AI video.” Let’s have a look at that close up for you: “Welcome to the course, the home of AI video.” Obviously not flawless, but actually pretty good; if you had a more realistic voice, if I were using a voiceover recorded like this, then perhaps it would look even better.

So that’s RenderNet. The obvious limitations are that there aren’t things like proper extending or changing camera angles. But if you were doing a narrator piece and needed to have consistency between characters, this would definitely have consistency between
63
characters because you select your character. When you go video anyone, I select my character
64
you saw before or upload any other image you want. For example, the image of a drone that
65
we’ve been working on before, it can actually do non character based things. So I can say
66
a drone flying over the city, but that’s not where this model is kind of USP and concentration
67
is, but let’s play with it. Okay. It’s very quick generation. Considering we just sat
68
there through those other two models, caribou and cling waiting quite a while. That one
69
is very quick. So it can do norm, uh, other images that are not character based. Yeah.
70
Slight bit of skewing, like it looks more illustrated on here. It loses a bit of definition,
71
but not too bad. And just to let you know, here’s the number of counts you can have.
72
I could have four generations, iterations of it if I wanted to. And here I’ve just noticed
73
is the automatically generated prompts. They have here a beautiful model wearing a classy
74
red dress, intricate pattern posing natural sunlight in front of an abandoned mansion
75
covered in weathered vegetation. And there she is. Okay. That was that. Let’s move on to the next.
— Heygen Update: Auto-Translate and Voice Clone Your Videos —
And welcome to this lecture. This is actually a little update, an extra lecture I'm adding into the course later, because I had some students ask me: hey, how do I translate a video into another language and have it sound like me and not a robotic voice? Well, I know how to do that, because I do it for my course videos, and it's really simple, so this won't be a huge long lecture. I'm using HeyGen. Go to app.heygen.com and you can sign up. There are different packages on here. I'm on a team package; let me show you these by coming over into subscriptions. You can see I'm on the team package, which expires in 16 days. I will definitely renew it; it's really, really good. Because I'm translating videos longer than five minutes, I need to be on a team package, somewhere about 70 bucks a month. There's also one, I think, for 30 bucks a month if you're doing videos of five minutes or less. The reason it's around 70 bucks a month is that a team package allows more people on there. I only need one seat for me, obviously, but two is the minimum you're allowed, so it's still 70 bucks for the month. If you're translating a lot of videos, that's a massive saving compared to getting it translated and recorded by someone in the native language, and then still having to run it through a voice changer, or something like ElevenLabs, to clone your voice. It's a much, much easier way using HeyGen. So let me show you how to do this.
There are loads of things you can do in here, like creating an avatar and so on; I talk about HeyGen a little bit later. But for this translation video, I can go create video, and then I want to translate a video. It says: okay, drag your video in here. So I can just grab a video of me from this lecture right here, and I can choose quick translate or advanced. I can do things like auto-detect the language, but I'm going to say no, it's definitely English, and I'm from England, so the UK at the bottom here. Then my target language. There are so many languages; I can translate it into any of these. Wow, crazy. There are so, so many. Okay, let's say I want to translate it into Korean. Okay, let's choose Korean. Great, that's selected, and I don't choose anything else. So this is good: allow dynamic duration. Obviously, it might take longer to say something in one language than in another, so I allow it (it'll only be seconds in this case, I'm sure) to run longer or shorter in Korean, given the sentences it's translating and how long they take to say. I could say translate the audio only and don't lip sync, but as standard it is lip syncing me, and it's also taking my voice: making sure it sounds like me speaking Korean, lip syncing, and changing the length of the video dynamically so it looks more realistic. I can say there's only one speaker in here, or we can auto-detect; leaving it as that is fine. And you just click translate.
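As a side note, the choices made in that translate panel boil down to a handful of settings. The sketch below just packages those UI choices into a settings object; the field names are hypothetical, not HeyGen's documented API schema, so check their developer docs before trying to automate this:

```python
def build_translate_request(video_url, source_lang="en-GB", target_lang="ko",
                            dynamic_duration=True, lip_sync=True, speakers="auto"):
    """Collect the same choices made in the HeyGen UI walkthrough:
    source/target language, dynamic duration, lip sync, speaker detection.
    Field names are illustrative only."""
    return {
        "video_url": video_url,
        "source_language": source_lang,       # "definitely English, UK"
        "target_language": target_lang,       # e.g. Korean
        "dynamic_duration": dynamic_duration, # clip may run longer or shorter
        "lip_sync": lip_sync,                 # False = translate audio only
        "speakers": speakers,                 # a count, or "auto" to detect
    }

req = build_translate_request("https://example.com/lecture.mp4", target_lang="ko")
print(req["target_language"])  # ko
```

The point of writing it out like this is just to see that "translate a video" is really five or six independent switches, each of which you saw toggled above.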
Now, I've done this already, so I can quickly show you. Here is one of my lecture videos; let me just play it, get to a point right here, and let you listen to this. Okay. How incredible is that? You can see the lips sometimes blur a tiny bit, but what do you expect? Look how good that looks. It's moving my lips based on the video, it's intelligently worked it out, and it's speaking Korean. So it's an incredible tool: you just upload your video and you're able to translate yourself, in your language, in your voice, into whatever language you want, and spit the videos out. And it's unlimited on those plans too, if you get the same team plan as I've got. Maybe it's a fictional video of yours: you can say there are two people in the scene, translate it, and it will translate using the voices of those people, which is incredible. So I'm able to input any video I want, translate it natively into the actual language I want it to be in, and put it out. For example, when I was uploading to film festivals, they said: hey, this video is great, but can you either add subtitles, or have it dubbed into, say, Korean? I'm now able to dub in the voices I had for those characters instantly. Or if you're doing course creation or YouTube videos, whatever: translate into any number of languages effortlessly; in just minutes they spit those videos out, and it's all done. So that's how you intelligently translate your videos using your own voice and lip syncing with HeyGen. It's a really amazing, amazing tool.
— Master Lipsyncing with Pika, Heygen, and More —
So, lip syncing. If you remember way back when I was doing the lecture on limitations earlier in the course, lip syncing was definitely one of them, and to an extent it still is inside all AI models. We're not that close yet to realistic AI lip syncing on realistic images. If you have an animated image, a Pixar-style animation, then of course it's far more forgiving and you can get away with it. So, what I want to show you here, while we're still on video tools: I've got five tools up here. Only two of them, the ones I actually use, are we really going to compare side by side for lip syncing, but I want to draw your attention to a few others first, before I get to Pika, which we've already spoken about when we were doing image generation, and then Hedra, which is a new one for you. I'll go through these one at a time, a real quick overview of each, before we compare them side by side with the same image and the same text and see how well they stand up.
Akool, for example, is quite good: they've got face swap, they've got live swap, they've got a talking avatar; and then there's D-ID and HeyGen. Akool is a cool tool. They have other stuff too, like live face swap: if someone's talking, you can change the face in the video, which is very impressive and actually quite good as long as the face shape is very similar, which we know from when we were creating images and did face swapping in the last section of the course. Talking avatar is pretty much as close as we're going to get to what we want. I can choose the avatar I want and then put in my script, or upload right here. So I can choose one of their avatars, or create my own with a description, or directly upload, which I've always had trouble doing: it always tells me my limit is five megabytes, then four megabytes for an image, even when I upload one smaller. But that's because I don't use this much, so I'm not on a paid plan; you would obviously get different allowances. I can choose one of their models. Let's choose Diego. So here he is, here's Diego. I can then put in my script if I want, choose the audio I need, or upload audio. So I can upload that file we made inside ElevenLabs a few lectures ago, that kind of funny Freddy Confetti one. I can play it for you. Yeah, that one. I can do other things like change the background, different colors if I wanted to match branding or an image, and I can generate here. These, of course, cost credits. I don't really use this tool, but I wanted to bring it to your awareness.
In the same way, D-ID I have used several times. If I come up here, it's much more for translation: I could translate my whole course into whatever language I want. But you can also generate a video, and create your own avatar like I have here, or choose one of the other ones. Happy, surprised, serious, natural, if I want. Movements: am I moving around a lot, or am I natural? And then I can choose my voiceover. "Hey, nice to meet you." That's a cool accent. "What do you think of my voice?" I think your voice is cool, really cool. I can also search for a voice in there if I know one, and import voices if you have an upgraded plan. Under script, on the left-hand side, is where you either put in your script and choose a voice, or upload audio, or record live, like we've been doing. Upload that same ElevenLabs download we made earlier with that funny voice, and generate video. This costs one credit; I have zero credits, because I use other tools. But D-ID is really great if you just need to make simple marketing ads and explainer videos.
HeyGen is very similar; it actually markets itself as making avatar videos. Do I want portrait or landscape? Let's say I want to make this for social media. Do I have a video, or do I have a photo? I'll upload my photo. That's the one; upload. Let's name this, and leave everything else unspecified (this is so you're not using famous people, or someone you're not or don't have permission for) and continue. Here's Dan, and here's my panel for adding my voiceover, so I can upload audio and submit, add my avatar from over here, and submit. Name this ABS and submit. Let's wait. And it's created this; let's take a little look. "Hello and welcome to the course. It's so, so nice to have you here. I really appreciate you being here. OK, bye." OK, nice. That gets me every time. Look at the lip sync on this. So if you just want front-facing avatars, this is a great tool. Watch this. "Hello and welcome to the course. It's so, so nice to have you here. I really appreciate you being here. OK, bye." Slightly less morphing in the mouth. Actually a really nice tool; that was HeyGen. So if you need that, now you know what it's called, as a backup, because I would normally use Hedra, Pika, or lip sync inside Runway. In fact, I would avoid lip syncing altogether, but it's unavoidable in some cases.
So let's get on to the two that I really like to use for this. I'm going to upload that same image of me. Now, in Pika, if you're set to version 1.5 down here, you need to switch to 1.0, and you'll have the option right here for lip sync. So I can hit lip sync, and either do text (which we know is not as good) and choose a voice from the drop-down menu, or upload, like we've done, that same funny, ridiculous voiceover. Attach that. Unfortunately, this is limited to three seconds, so you would need to do it in parts, depending on what you're paying and which package you have. Let's generate that while we're waiting for it to generate.
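One practical consequence of that three-second cap: a longer voiceover has to be split, lip-synced in parts, and rejoined in the edit. A quick sketch of the chunking arithmetic; the 3-second limit is the one mentioned above, and yours may differ by plan:

```python
def split_into_chunks(total_seconds, max_chunk=3.0):
    """Return (start, end) times covering a clip in pieces of at most
    max_chunk seconds, so each piece fits the lip-sync limit."""
    chunks = []
    start = 0.0
    while start < total_seconds:
        end = min(start + max_chunk, total_seconds)
        chunks.append((start, end))
        start = end
    return chunks

# An 8-second voiceover needs three parts:
print(split_into_chunks(8.0))  # [(0.0, 3.0), (3.0, 6.0), (6.0, 8.0)]
```

Each (start, end) pair is one upload-and-sync pass; the pieces then go back to back on the timeline.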
Let's go over to Hedra and compare these side by side. Hedra is mainly made for lip syncing; it's one of their main tools. From left to right it's very easy: here's your audio, here's your character, here's the output. Really nice and simple. I can either write text and do text-to-speech with a voice I select, record straight from my laptop, or upload, which is what we're going to try here. A really nice feature, which you saw on the previous screen, is that you can select a clip from a longer piece of audio if you want. So I have my clip here, eight seconds long. I could also do some things if I upgraded, like remove background noise, change the voice, and so on; I don't need to for this. Let's upload that image of me, though you can also generate one with text-to-image here, or from photos. Let's do upload. Yes, this is my face; it knows that. I can choose this to be 9:16, 1:1, or 16:9, but it's quite close on me. Let's zoom out a tiny bit, and actually do 16:9. Zoom out like here, place me. Okay, great. Let's generate that and go back over to Pika to see how they're getting on.
It's finished, it's done, so let's take a little look at this. Okay, I think the lip movement is actually quite realistic. Though maybe it's not even as good as HeyGen's; HeyGen actually has nicer lip syncing here, and that might become my go-to. Pika is still pretty good. Again, you're limited on clip lengths, but I could retrim this, re-sync it if I don't like it, edit it, add four seconds, or upscale it if I want to. Let's go back over to Hedra and see how they're getting on. Finished already. Let's have a little look. "Hello and welcome to the course. It's so, so nice to have you here. I really appreciate you being here. Okay, bye." The facial expressions, I think, are the best inside Hedra. "Hello and welcome to the course. It's so, so nice to have you here. I really appreciate you being here. Okay, bye." You see, even at the end, the eyebrows move as I exclaim towards the finish. Really nice, really realistic. Look, the whole face is moving, the jawline coming down. That's why I like it slightly better. But you know what? HeyGen wasn't too bad at all, was it? Let me go back and try that again. If I make this 16:9, let's put me right here. Let me go to my script. This time I'm going to use one of their stock voices, because I want a normal voice rather than the comic one: "Welcome to the new era of video creation with HeyGen. Simply type..." Let's call this one AIVS2 and submit. Okay, let's take a little look at this. "Welcome to the new era of video creation with HeyGen. Simply type your script to get started." Wow. Okay, I'm really glad that I came back to HeyGen. Now, I've used it many times before; perhaps they've updated something on the back end, because this is so much better than it previously was. I was using Pika, and Hedra a little bit, but for face avatars like this I'll definitely use HeyGen, I think.
To summarize: I use HeyGen, maybe Pika, though it's quite limited in time, and Hedra and HeyGen for this talking-to-camera stuff. This is brilliant, really, really nice. Still ElevenLabs for my audio, I think. And then I'll use the lip syncing inside Runway for character and animation stuff; I might as well keep it inside the same tool I'm using for video generation. So that's lip syncing, one of the limitations of AI, and it's going to get better and better. I'll keep adding to this lecture as and when the tools improve, or if there's another tool out there I need to cover.
Now it's time for my task. We've been going along the course and I've been making our project, and you can see me, in real time, create all my shots and make my video. It'll be a long one. I cut out a lot of the rubbish and downtime, so it's not too long to watch, but you can see somebody make an AI video in real time, to fill in any gaps or missed knowledge; any questions you have might be answered through watching me do it live.
— Course Project: Bringing It All Together – Ai Video Creation —
So, this is the course project you've been following along with. You've seen me make everything from idea to script, then on to creating audio, then images, and now finally video. So this is a huge video right here: it was over four hours long. I pretty much recorded myself picking up from where we left off after creating the images and our storyboards and so on; I recorded myself in real time. Now, I do not expect you to sit here and watch four and a bit hours, so I've sped this up, three or four hundred percent, so it's maybe 50 minutes long, but I still don't suggest you go ahead and watch the whole thing.
What I want you to do is scroll through it, because if you're struggling with something, you may see the answer here. If you have a look on screen right now, you can see the prompting I'm using to get certain images over there in Runway. If you pause the screen and zoom in, you'll be able to see my exact prompts for things like Runway, or you'll also see me inside Midjourney correcting some images, changing some images, changing some faces. You'll also see me in Photoshop using some generative fill, and also Haiper and some other tools when things didn't work out. For some reason, Runway was blocking me when I was trying to change the image the girl was drawing. It was blocking for some reason; I'm not sure why, some safety feature. She was trying to draw a sketch of her family on paper. Not sure why. So we went over to one of my backup tools, Haiper, and I think Pika also.
By all means, treat this as a silent video: just scroll through with your finger or cursor. If you want to see something, a prompt that was happening, like how I got the guy walking, the backup shot, and all the different variations I did for this image or any of the others, you can zoom in and see those in your own time. Then we'll go on to the next section of the course after this: there's a task, and then we'll go on to doing our sound effects for this video. Gradually now, we're almost putting the whole thing together. So enjoy this. Do not watch the whole thing; just skip through it and see any answers to questions you might have.
— Task: Experiment with AI Video Tools —
So, we're at the end of the section, and once again it is that time for a task. This time, obviously, it's your turn to use any of the tools. I recommend the ones you've seen me use currently, and that may change as more AI tools come into play, but maybe you saw something you like a little better among all the different tools I showed you, or you're having a better experience with one than another, or perhaps budget, time, or anything else comes into play.
So: transform the images we created in the last section into video to make your masterpiece. Using AI tools of your choice, generate video clips from those still images to bring your vision to life. Follow the steps below to make the most of the tools and techniques we've learned.
Choose your AI tool: pick one that aligns with your creative goals and your budget. Once you've selected that tool, use its image-to-video capabilities (that's the way we're doing it in this course) to generate and create your clips. Apply camera movement: depending on your tool, that's done by using your prompts to tell the camera what to do, to zoom in, to pan, or to have no movement at all (you saw me use that a lot), along with any character movement or action that happens inside the image. Then experiment with the tools. If you're using Runway, or other tools with similar features, try the camera settings, where you actually get to choose the zoom, pan, tilt, and horizontal or vertical motion.
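To make the camera-movement step concrete: those controls boil down to one signed intensity per axis, with zero meaning no movement. The sketch below is purely illustrative; the parameter names and the ±10 range are assumptions, not Runway's (or any tool's) real API:

```python
def camera_settings(zoom=0.0, pan=0.0, tilt=0.0,
                    horizontal=0.0, vertical=0.0, roll=0.0):
    """Bundle per-axis camera intensities; 0.0 means a locked-off axis.
    The +/-10 range is an assumed convention for this sketch."""
    settings = {"zoom": zoom, "pan": pan, "tilt": tilt,
                "horizontal": horizontal, "vertical": vertical, "roll": roll}
    for axis, value in settings.items():
        if not -10.0 <= value <= 10.0:
            raise ValueError(f"{axis} out of range: {value}")
    return settings

# A slow push-in with everything else static:
print(camera_settings(zoom=1.5))
```

A completely static shot is just all zeros, which is often what you want when the subject itself is already moving.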
Or, if you're doing a character talking to camera, to a piece, or to each other, perhaps try using Act-One inside Runway, like I've shown you. And one of the biggest things you've probably learned: if you extend your clips, especially in Runway, with and without prompting, you'll probably find you can create your whole next shot and seamlessly pan from one to the other, for example. Or you can tell it to extend and add an explosion, add cuts: I want to see a TV now, I want to see a window, whatever it is. You can probably do that inside the video model, as opposed to making multiple images. But you could, of course, use your in image and out image in lots of these tools, to tell the AI model where to start, where to finish, and to animate in between. So go ahead, play with this, do your task, and I'll see you in the next section.
— ElevenLabs: Crafting Stunning Sound Effects —
Now, I want to show you using AI tools, ElevenLabs, and then in the next lecture I'm going to show you some non-AI ways, in case you need to get sound effects some other way that isn't AI, just to give you both options. We have our edit here. Obviously, I haven't completely edited it. After I've laid down all the video I want, and before I go in and edit properly, doing things like the titles you're seeing, making some shots move or slightly zoom in, maybe color grading these to match and look a little more like that 1940s Technicolor, and the timings and so on, what I need to do is my sound effects. I want my sound effects first, because they will dictate everything from my edits to my cuts: I might cut on a certain shot as a sound comes in. I also need to do the voices we were generating for the little bit of audio I have here, where the two fathers speak. We can do all of that inside the main tool I use for this, which is ElevenLabs. So the first thing I'm going to do is generate some sound effects, and then we'll do some voice changing and text-to-speech for the two speech parts.
I'm just going to scroll through and do a few different sound effects to start with. Let me come right here: for the drawing, I might want a drawing or sketching sound effect. So let's do that one: "drawing, sketching on paper." Let's generate that. To remind you, over on ElevenLabs (elevenlabs.io), down here in sound effects is where you get this; you just describe exactly what it is you want. You can go to settings, and there are more settings, like automatically pick the best length, or I can tell it I want 15 seconds. I only really need five seconds, but I often generate 15, because a middle part, an end part, or the start might be slightly better, and when it comes to the edit I'll just use that part. So here are my four generations. Let's have a listen.
25
this one. Now what I do over my edit is I drag those in that I’ve just downloaded. Let’s
26
put them there. Let’s have a little listen to these. Yeah, okay, let’s grab like this
27
section right here. I’m going to grab that amount. Let me see when I want to start the
28
drawing. I’m going to have some sound effects right here of the drawing. Let’s put that
29
right there. Let’s have a listen. I’m just going to turn it up. I can see right here.
30
It’s down here. I want it to be more like there because I’m going to have some music
31
over there. So I want to just make sure that’s enough with Premiere Pro. I can hit G and
32
I can say turn that up 30. That’s quite loud. Let’s see. Wow, that’s really loud. Okay.
33
And I’m going to put that up 15. Keep it going until about here. Okay, let’s actually just
34
turn it down. Minus that by five, because I’m also going to have a girl humming. Young
35
girl humming. It’s humming to herself. See these generations? Nice. Okay. So the first
36
two seem to always be really good in Eleven Labs and then the others, not so much. Let’s
37
download those. I’ve got two because I’ve got two girls. I’ll just show you putting
38
in one place. Let’s go back and do that first one here. Okay. I’m going to have it fade
39
like here. I’ll have it fade out. Okay. So now it comes in like here, which is nice.
40
I’ve got this going over. So I could have, for example, I could have going all the way
41
from the start, which is a little bit eerie almost. So I could have it all the way from
42
here. Okay. So I’ve got a girl humming and I’ve got that. Remember there’s going to be
43
music over this, so it’s not going to be too prominent, too eerie. So the other thing
44
I’m going to go across, let me just pan through. I’ve got the same thing. I might have the
45
noise of a city dockyard. Let’s do a busy dock yard in the distance. Have a listen.
46
I’ll place this here and cut it as soon as this cuts out of there. Have a little listen.
47
Yeah. Just going to move that sketching down slightly. Okay. So I’ve got those noises.
48
I have the same for a Japanese city here, the humming and sketching. Then this is Amy.
49
There’s some more sketching noises happening here and here, and then footsteps walking
50
in. So I want footsteps on wooden floor. Okay. Let’s listen to that. Nice one.
51
Not. Maybe I need to say quick footsteps. Let’s try it again. We have quick footsteps.
52
It needs to match. The father walks in pretty sharpish on here. Where is he? He walks in.
53
Okay. Let’s have a listen.
54
Okay. I like this one. And this second one was pretty good. Download both of those.
55
In the edit, I’ll make that match his footsteps. And then I’m going to use the other one. So it
56
sounds slightly different for footsteps from here. Okay. Then I want both the fathers talking here.
57
I’ll come back to that in a moment. Stick with some sound effects. Okay. Back to sketching and
58
probably humming here. Sketching noise. Sketching noise. Now I want bombs dropping
59
and a siren. So let’s call it a war siren. I like that one. And let’s do explosions and planes
60
flying, men shouting, panic. Now it’d be something like that. Although I’m going to turn this down
61
slightly. G minus five and this one G minus 12. And we’ll have some explosion noises here. We go
62
back to that noise right there. And again, here with an explosion.
63
I’m going to put in the distance. Those noises sound like too much like I was in there. So
64
something you might want to always put in there is in the distance explosions, planes flying,
65
for my example. And then what I’m going to do is remember sooner we used to get in some
66
tracks I downloaded earlier. After we have this explosion, I might have, I’m going to turn this
67
right down. G minus 13 is I might have a song kicking right here and all this be with music
68
over the top, if you like, like that. Okay. Let’s go back to our sounds. I like both of these
69
noises. That one with a guy like almost shouting and that one with the explosion. So I’m downloading
70
them both and I’m going to use them both. So if I put this, where’s American Amy back here when
71
she first noticed is something’s wrong like here. So now I’m going to this other one with
72
explosions are in like that noise right here, something like this. So let’s blend these noises.
73
That one definitely comes into that one. I’m just going to make the gain come down on that one.
74
And then I want that atom bomb, huge explosion, huge explosion, caps, their huge explosion
75
in distance. Okay. I think this one’s actually the best one before I add this one. And then I’m
76
going to show you how it looks now with some sound effects in there. This huge explosion comes in
77
right there, doesn’t it? Okay. So it’d be like this. And then my music will come in and it’ll
78
just be completely deafening music. So I’m just going to show you, I’m not going to put in the
79
other sound effects for more stepping and things. I’ll do that off camera. You can then see how this
80
is going in the next stage of edit, but just to show you how it kind of brings it to life a little
81
bit. Let me play some of this for you and you can listen.
82
So we’ve got a sketching and then let me scroll across some more sketching here
83
and footsteps come in. I’ll match those up later
84
for both of them. And then some talking. We’ll do that in a moment then.
85
All right. And then we’ve got more and more explosion noises going back and forward from siren.
86
And then we’ll have something like, I’ve got some songs right here that I downloaded that we used
87
earlier. This song was 1940 style made on Sumo. Let’s have a listen.
88
Very 1940s. So what I want to do is if I just take a little snippet of that,
89
come to here, I’m going to put it on an audio track right where the explosion happens.
90
And let's have it happen right as that's happening; there'll now be something like this. So I've got an effect like this, and I also have a music track from Suno without any dialogue or lyrics in it. You can see how we can elevate the whole production just by adding some sound effects in there, something underneath: not just music, but elements that bring to life what you might be missing in the visuals. If something doesn't quite add up visually, what are they doing? Well, you can tell the whole story with a sound effect.

So the only other thing I want to do now is the voices.
I've got to have the father's voice, for both the American and the Japanese versions. You saw in the sound-effects work we did earlier that I generated voice-to-voice and also text-to-voice for the Japanese; in fact, I have the audio for the Japanese dub here. So I'm going to add those in now, over the top, and then you'll see the final bit of my sound-effects pass before we move on to the next step. "Okay, little lady, come over here and say goodbye to your dad." "Okay, little lady, come over here and say goodbye to your dad." So now I've put those in, and I've left a little note, since this will mainly happen in the edit, just to remind myself what it is: I have the audio in for my Japanese and my American versions. For example, let me play a bit for you: "I'm heading to work, you can watch me from the window, okay?"

So now I have pretty much all the sounds I want in, except a little bit of music that I'm going to keep playing with and adding. Even when everything is edited, finalized and up-resed, the music will probably be the tiny thing I keep tweaking throughout; it's probably the very last touch. But just to show you, this was everything I do for making sounds: those diegetic sounds, the ones that are meant to exist in the scene, someone coloring, someone walking, that you put in afterwards to elevate it. And I use ElevenLabs for that. Next, I'll quickly show you a way you could do this without AI, just quickly, so that you have both options.
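If you end up generating a lot of effects like this, the same thing can be scripted instead of done one clip at a time in the browser. Here's a minimal sketch against the ElevenLabs sound-effects HTTP API; the endpoint path and field names are assumptions based on their public docs, so verify them against the current documentation before relying on this.

```python
"""Sketch: generating a sound effect from a text prompt via the ElevenLabs API.

This mirrors what the web UI does when you type a description like
"huge explosion in distance". The endpoint path and payload fields are
assumptions -- check the current ElevenLabs docs.
"""
import json
import os
import urllib.request

API_URL = "https://api.elevenlabs.io/v1/sound-generation"  # assumed endpoint

def build_sfx_request(prompt: str, duration_s: float = 3.0, api_key: str = ""):
    """Build the URL, headers and JSON payload for one sound-effect generation."""
    headers = {
        "xi-api-key": api_key or os.environ.get("ELEVENLABS_API_KEY", ""),
        "Content-Type": "application/json",
    }
    payload = {
        "text": prompt,                  # the description, e.g. "huge explosion in distance"
        "duration_seconds": duration_s,  # how long the generated clip should be
    }
    return API_URL, headers, payload

def save_sfx(prompt: str, out_path: str, duration_s: float = 3.0) -> None:
    """Call the API and write the returned audio bytes to disk."""
    url, headers, payload = build_sfx_request(prompt, duration_s)
    req = urllib.request.Request(url, data=json.dumps(payload).encode(), headers=headers)
    with urllib.request.urlopen(req) as resp, open(out_path, "wb") as f:
        f.write(resp.read())

if __name__ == "__main__":
    save_sfx("huge explosion in the distance, muffled air-raid siren", "explosion.mp3")
```

The same loop could run over a whole list of effect descriptions for a scene, dropping each file straight into your project folder.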
— Stock and Free Sound Effects: Your Ultimate Guide —
Now, if you didn't really need many sound effects and voice work, and you weren't getting a subscription to ElevenLabs, there are some other ways you can get stock sound effects, if you like that word: stock.

Your first bet is to search for something like "stock sound effects" and see what comes up. Artlist comes up (that's sponsored, and it comes up all the time on YouTube, you've probably seen it), and so does Envato Elements; I've used Motion Array, and Pixabay has some free options.

If I go into Pixabay, go to sound effects and search for, say, a siren, I can have a little listen to some of these, just like the air-raid siren I was using, and download them. Some are free and some are paid; these are royalty-free siren sound effects, so Pixabay is probably your best bet right now if you want a free one. If you want to pay, there are of course lots of different packages; Motion Array is one I've used quite a lot, though there's obviously a monthly subscription for that.
Now, if you are putting your footage onto YouTube only, there's something called fair use on YouTube: if you are adding to and using small clips from other YouTube videos, that can be okay. You've probably seen lots of videos using small clips and sounds (not music so much) from other videos under fair use. Please look it up, though: you have to add to it. For example, if someone runs a movie-review channel and uses clips of a movie, that's okay under fair use as long as they are talking about it and adding to it. Or if you're just creating this for yourself, just for practice, that's fine.

For example, I could go into YouTube, search "siren sound effect", and if there's a sound effect like this one, let's have a listen. I could take that and just search "YouTube free download online"; there are lots of these. I quite like using savefrom.net: you just take the URL right here, pop it into there, hit download, and ignore the ads; you can click, for this one, to allow it to download inside my browser. I can choose, for example, audio only, any one of these, or you could download the video if you want to and just use the audio from it. Hit download, ignore the ads, close those, it'll get ready in a moment, download it, ignore the ad once again, and it's now in my downloads.
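If you're comfortable with a command line, the open-source tool yt-dlp does the same job without the ad-laden download sites. This is a hedged sketch, not an endorsement: it assumes `pip install yt-dlp` and ffmpeg on your PATH, the URL is a placeholder, and the same fair-use caveats from above still apply to anything you download.

```python
"""Sketch: pulling the audio track from a YouTube video with yt-dlp.

An alternative to browser download sites. Requires `pip install yt-dlp`
and ffmpeg installed, and -- as in the lecture -- check YouTube's terms
and fair-use rules before using anyone else's content.
"""

def audio_options(out_dir: str = ".") -> dict:
    """yt-dlp options: best available audio only, converted to mp3."""
    return {
        "format": "bestaudio/best",
        "outtmpl": f"{out_dir}/%(title)s.%(ext)s",  # save under the video title
        "postprocessors": [{
            "key": "FFmpegExtractAudio",  # run ffmpeg to extract the audio
            "preferredcodec": "mp3",
        }],
    }

if __name__ == "__main__":
    from yt_dlp import YoutubeDL  # third-party: pip install yt-dlp
    url = "https://www.youtube.com/watch?v=VIDEO_ID"  # placeholder URL
    with YoutubeDL(audio_options("downloads")) as ydl:
        ydl.download([url])
```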
Once again, though, familiarize yourself with YouTube's fair-use guidance; you can search to see whether you're okay to use something and in what regard, and this is forever changing too. If you're just using it for yourself, then of course go ahead; otherwise, like I mentioned, if you don't want to use AI to generate sounds, your best bet is probably Pixabay, but make sure you are using royalty-free sounds. That's important.

Okay, sound effects done.
— AI Sound Effects with Kling —
Now, Kling for sound effects. It may not be the first place you think of for sound effects, but if you're already using it for image or video, it's a really good tool to get sound effects from too.

If I just scroll down here, I can see everything else: I've got AI tools, and scrolling down I can find sound generation, or sound effects. Let's click on it. The great thing about this is that if I was already inside, I don't know, creating text-to-video or something, the sound effects generator is right here on the left-hand side. You've got two options: text-to-audio, which is basically prompting and describing (and we can also use DeepSeek to help with that), or video-to-audio.

So if I go to text-to-audio and prompt, let's say, a noisy New York busy street, people walking, talking, traffic and sirens, say I wanted that for my scene. I can choose how long I want it, ideally 10 seconds, and it costs four credits to generate four outputs on here.

Now, don't just use that: let's go to DeepSeek and let it create the best possible prompt for us, since DeepSeek knows how this works. Let's see: "horns blaring, pedestrian chatter, subway rumble, taxi engines revving, crosswalk signals beeping, siren wails, street performer saxophone melody". Oh, that's quite nice, and a little bit different from what I wrote. Let me generate both of these: generate that, then I'm also going to use the prompt here, wait a second for it, and click generate for that too.
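That DeepSeek step can also be done programmatically rather than in the chat UI. A minimal sketch, assuming DeepSeek's OpenAI-compatible API: the base URL and model name below are taken from DeepSeek's public docs but should be verified, and it needs `pip install openai` plus a `DEEPSEEK_API_KEY`.

```python
"""Sketch: asking DeepSeek to turn a rough sound idea into a detailed
sound-effect prompt, instead of writing it by hand in the chat UI.

Base URL and model name are assumptions -- check DeepSeek's docs.
"""

def build_messages(rough_idea: str, max_words: int = 30) -> list:
    """System + user messages asking for a comma-separated list of sound elements."""
    system = (
        "You write prompts for an AI sound-effect generator. "
        "Answer with a single comma-separated list of concrete sounds, "
        f"no more than {max_words} words, no explanations."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"Sound idea: {rough_idea}"},
    ]

if __name__ == "__main__":
    import os
    from openai import OpenAI  # third-party: pip install openai
    client = OpenAI(api_key=os.environ["DEEPSEEK_API_KEY"],
                    base_url="https://api.deepseek.com")  # assumed base URL
    resp = client.chat.completions.create(
        model="deepseek-chat",  # assumed model name
        messages=build_messages("noisy New York street with traffic and sirens"),
    )
    print(resp.choices[0].message.content)
```

The returned list ("horns blaring, pedestrian chatter, …") then goes straight into Kling's text-to-audio box.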
Now I've got them generating right here. Let's wait for those and have a little listen. It doesn't take long to generate sound effects at all. Let's play through some of these. Yeah, really nice; that one's got a bit of siren in the background. If I was doing a New York scene, that's the one I would use, for sure, so far. Oh, maybe this one; that's a really nice sound effect. I would definitely use that, and I can just click here to download these if I want to.

So let's hear the DeepSeek one, which was "siren wails, street performer saxophone melody". Nice: it's made the siren into a saxophone melody. Great. That's actually really nice; I can imagine this being an opening title sequence, like you open up on New York and that plays. And the same for these; yeah, this one I think is really nice.
Now, another thing you can do here for sound effects is give it the video you already have, and it will automatically make a sound effect for you. For example, this one we had of a panda playing a guitar didn't have any sound with it, didn't have any sound effects, and I want to automatically make sounds for it. That's great. Let's scroll down and make sure there's nothing already in the prompt box; I don't need to put anything here, so let's try it without. There's also an ASMR mode, by the way; if you know what ASMR is, it's where people get close to a microphone, move left and right, click, tap their nails, things like that. Use it if that's the kind of video you're making, but I expect most of you are not. So let's upload this and just hit generate for four options.

I can see them working right now, right below the video, so it should sync with the movements and everything else. Let's see what it comes up with. That's finished already, super fast. Really nice. Great. And I could of course be prompting there as well, with something like "country music plays".

But this is a really good tool, because in your editing you could be cutting multiple shots together to make your scene. Perhaps we used, I don't know, text-to-video and made multiple shots of a man walking down the street, entering a shop, sitting down, or whatever. You could upload that whole thing as the video right here, and it will intelligently add the sounds: as the man's walking, perhaps footsteps and street noise, and then as he enters, coffee-shop noise. You could be doing all of that automatically inside here for your sound effects.

So it's a great way to get the sound effects you want, from text-to-audio or from video you already have. Really great tool for that.
— How We Created a Mini Movie with Only AI text to Video —
So this lecture and the next are an extra I've added on to the course. I've made a mini movie, about 30 minutes long, using only AI tools, and only using text-to-video, which I know I've said loads of times isn't great because you can't get the best continuity. That's absolutely true. But I thought to myself: what if you didn't need great continuity? What if that wasn't something the project required? Is there such a project? I believe there is, and I've made it, and I'm going to show you how I did it and exactly how it looks. So let me show you a touch of that right here.
I'll just play this movie in the background right there. What if you wanted to create a movie, and it's this quality? Look at what's coming up on screen; it looks very realistic. And what if you didn't need to have the same person? You'll notice that my main character in here doesn't stay the same throughout, and that's okay. Some of these are real shots I've added in, and then these are mine; they're almost like reconstruction shots. I'm going to add the full video in the next lecture, so you can go and see it if you want to. It's a biopic I've created about the artist called Jelly Roll. So let me just show you: if I remove that, I made this page about it right here, the Jelly Roll movie, and there's the trailer for it here.
I've explained what I used and how I make these; I'm calling them YouTube movies. This one is around 30 minutes, and I create them in short, two-minute scenes; I'll show you that in the edit in a moment. The style is heavy voiceover telling the story, and if you have a voiceover telling the story, that's perhaps the reason you don't need continuity so much, almost like a documentary with those reconstructed scenes. If someone is telling their story and the person changes in the background of short snippet shots, you either don't notice or it doesn't matter, because it doesn't affect the story; the story is being told in voiceover. I made these entirely with AI tools.
I list them up here. Honestly, it was 95% Veo 3, and a tiny bit of Runway, though I don't actually think I used any of my Runway shots in the end, maybe a couple; Midjourney to create some images that I turned into video with Veo 3; ElevenLabs as my voiceover; and sometimes Pika, using Pikaframes to merge some shots together for a really cool effect. That's what I used it for.

So now you know the idea of this, I'll go into the process of how I made it. Let me just bring up Premiere Pro.
This is the file right here. If I go back to all my bins, you see right here this is Jelly Roll, and there are scenes 2, 3, 4, 5, 6, 7, 8, 9, 10, 11. If I scroll these out, these are all about two minutes or so each; this one was only 30 seconds, because it was the last scene. What I did was break my script down into about 14 sections, each about two minutes long, so the video came out at 28 or 29 minutes in the end, and I'll show you my process for that now. For this, of course, I started
just like I show in the course: I used ChatGPT, and I asked for a Jelly Roll script. First of all: okay, make me a script about his life, let's break this down. Then I went and did my own research to make sure it was good enough. Because it's for YouTube, I asked for a really tight hook in the opening, and at the end of every two-minute scene a really tight hook into the next scene to keep people watching; that's a YouTube tactic, because these are meant for YouTube. If I go to my script here, I can see that it's broken down:
first I've got the structure, like I spoke about: act 1, act 2, act 3, with a mini structure inside. We spoke about that earlier in the course: create a structure. Then I created my script, and yes, a large percentage of this was all ChatGPT, but then I also went away and listened to him do several podcasts, and some of the scenes are actually stories that he told on a podcast. The AI could draw from podcasts and so on, but I don't know, it adds a little bit more authenticity if you do your own research. If you're making a narrative or a biopic video like I'm doing here, it needs a little bit more care; it needs to be a hundred percent authentic, because it's telling the story of someone's life. So I have all my scenes right here.
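The split into roughly two-minute sections can also be automated as a first pass. A small sketch, assuming an average narration pace of about 150 spoken words per minute (that rate is my assumption, not something from the lecture; adjust it for your own voice):

```python
"""Sketch: splitting a long script into roughly two-minute voiceover sections.

Assumes ~150 words per minute of narration. Splits on paragraph boundaries
so sentences stay intact.
"""

WORDS_PER_MINUTE = 150  # assumed narration pace -- tune for your own voice

def split_script(script: str, minutes_per_section: float = 2.0) -> list:
    """Group paragraphs into sections of roughly `minutes_per_section` minutes."""
    budget = int(WORDS_PER_MINUTE * minutes_per_section)
    sections, current, count = [], [], 0
    for para in [p.strip() for p in script.split("\n\n") if p.strip()]:
        words = len(para.split())
        if current and count + words > budget:
            sections.append("\n\n".join(current))  # close the current section
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        sections.append("\n\n".join(current))
    return sections
```

Each returned section then gets its own hook written in at the start and end, as described above, before going to the voiceover stage.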
This is maybe, I don't know, 29 pages or so, broken into sections, each of which could be the next section of the video. So first I wrote my script and got it down how I wanted; the next step was actually to do the voice first. So here's the text I pasted into ElevenLabs.
Here's the actual speech: "I was 13 the first time they cuffed me, just a kid, already deep enough in the streets that the law knew my name", then "long pause" in brackets, "I wish I could tell you this", long pause, long pause. This is what I pasted into ElevenLabs. Inside ElevenLabs I pretty much did text-to-speech like this, and I pasted in each section, one section at a time. So, going back to my script right here, I paste this in, like this, boom; let me just show you. I paste this in here and change the voice I want: whether you want to use a very similar voice to the artist (you'd need permission to use the actual voice), or just a narrator's voice, or perhaps it's your own voice. This is actually my own voice right here. And you can see these become purple right here, telling the model that I want a long pause. I can also say things like "dramatic", and it will speak slightly dramatically, or slow, fast, etc. I do each one of my 14 scenes and get the voiceover. Then the first
thing I do in each one is make a bin for each scene: scene 12, scene 13, scene 14, and obviously scenes 1, 2, 3, 4, 5, 6, 7, 8, 9 before that. And I put the audio down right here on a track; if I move out and scroll out, this right here is the audio. I laid it down and made any gaps that I wanted, or closed any gaps, after it generated, because this can take a few tries in ElevenLabs: you might generate it and it's not right, so you generate it again, and again. What I found is that ElevenLabs perhaps learned the style that I wanted, because after the first maybe five or six scenes, the first generation seemed to be perfect each time.
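With 14 sections, those repeated text-to-speech runs can also be scripted. A hedged sketch against the ElevenLabs text-to-speech endpoint: the path, body fields, and the `<break>` pause tag are taken from their public docs but should be verified, and `VOICE_ID` plus the model name are placeholders.

```python
"""Sketch: scripting the per-section ElevenLabs text-to-speech calls instead
of pasting each section into the web editor by hand.

Endpoint path, fields, break-tag syntax, and model name are assumptions --
verify against the current ElevenLabs docs. VOICE_ID is a placeholder.
"""
import json
import os
import urllib.request

VOICE_ID = "YOUR_VOICE_ID"  # placeholder -- copy from your ElevenLabs voice

def with_pauses(text: str, seconds: float = 1.0) -> str:
    """Replace [long pause] markers from the script with ElevenLabs break tags."""
    return text.replace("[long pause]", f'<break time="{seconds}s" />')

def build_tts_request(section_text: str, api_key: str = ""):
    """URL, headers and payload for one section of narration."""
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
    headers = {
        "xi-api-key": api_key or os.environ.get("ELEVENLABS_API_KEY", ""),
        "Content-Type": "application/json",
    }
    payload = {"text": with_pauses(section_text),
               "model_id": "eleven_multilingual_v2"}  # assumed model id
    return url, headers, payload

def narrate(section_text: str, out_path: str) -> None:
    """Generate the audio for one section and save it to disk."""
    url, headers, payload = build_tts_request(section_text)
    req = urllib.request.Request(url, data=json.dumps(payload).encode(), headers=headers)
    with urllib.request.urlopen(req) as resp, open(out_path, "wb") as f:
        f.write(resp.read())

if __name__ == "__main__":
    narrate("I was 13 the first time they cuffed me. [long pause] Just a kid.",
            "scene_01.mp3")
```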
It was really good. So you lay it down; imagine I've got 14 scenes in my edit, and when I open up the whole thing, I've got the voice down for everything. Perfect. Then what I did was find music. You could of course create this, like I've shown you, in Suno or something (make sure you get a licence for that), or I use a site called Musicbed, where you pay a monthly subscription and then have access to some really great music. I found a music track for each of them; because these are two-minute scenes, you can pretty much have one music track per scene, ideally with a really nice opening section to it. Let's see if I can find one on here. So if I play this, oh, this one actually doesn't have an opening like that.
Let's scroll here. So I'm playing this; yeah, the music's playing and we come into the scene. See, this one's about food addiction; he talks about a chocolate bar, and the title of the scene is "The Unseen Battle". You'll see in the next lecture that at the start of each scene I have something in-scene that shows the title of my mini chapter. For example, there's a scene outside the prison, and the prison sign says "The Revolving Door". That's the sign, that's the name of the chapter that I want, and I put it into the sign. So I've got some music coming in.

So now, after that phase, you've got all 14 (or however many) scenes, you've got all the audio down, you've got the music down, and now you need to add your shots in to create this, don't you?
So for that, like I said, I primarily used Flow. Now, my character has some very specific features: he's about 300 pounds, most of it overweight, and he has face tattoos, a rose and a cross, which sometimes generated as two, so I had to regenerate. I've got a specific prompt for that, which I show in far more detail in the course, but I say the man is aged 35, chubby (it sometimes didn't like it when I said the word "fat"), so "chubby, very overweight, 300 pounds, mid-length straggly hair". It sounds a bit cruel if I spell it all out, but if I just said "overweight" or "300 pounds", it would sometimes bring him out looking, I could probably show you, not that heavy, or I don't know what it generated, someone of Asian descent, but they weren't the size I was looking for. So I found you have to be this specific if you care about the way someone looks as well as their size. Jelly Roll has lost so much weight now, and that comes at the end of my movie, with his addiction to food, so I had to keep changing this "300 pounds" to 250, 200 pounds, etc. as the movie went on. Then I had to say he has a face tattoo, in brackets: one cross on his cheekbone, a rose on his forehead, in black and white. Sometimes it made two crosses for some reason (it did that quite a lot, actually), and then I changed it to say "one cross" or "a cross". Anyway, I got this down so that every single time, I can pretty much just copy what the man is wearing and looks like, and then put it into a different scene each time.
Now, no, the man does not look the same in each shot. If I go back and bring this up, you'll see his childhood: he's like a different man there, to there, slightly, to there. But when you make the shots really dark like this, and there's a cross on his cheek and things, it really doesn't matter so much, and the voiceover is telling the story, so it didn't really matter. It's primarily a voiceover, almost like a documentary of a movie, a voiceover all the way through, and each two-minute scene has a small, 10-second, actually acted-out scene. For example, if I play this, there's a voiceover talking about it.
He's saying "the boys wanted me to rap" in jail, things like that, and then he actually raps. This is actually using Veo 3: I say, hey, the man raps about XYZ, whatever he raps about, and move it up. So it sounds like that, but you can see all of this in the next video and watch it through. So I created it pretty much entirely with Veo 3, except, let me show you this: this is what I used Pika for, when I wanted shots to merge like this, from one to the next. I grabbed images from Midjourney and put them into Pika, and they merge as he's telling the stories, like: you want to know how this artist came from this to this? Doors open up onto the home, and then I enter into the scene inside the kids' home in his childhood, etc.
I used Pika for that. It was really, really great, not so much for generating multiple shots, but for merging those shots together if you want, almost like a mini montage that flows from one to the other. So that's what I've created: a narrative movie that uses a voiceover throughout it all, so that you don't care about continuity so much. One day, of course, I'm sure inside Veo 3, and maybe by the time you watch this, depending on when you're watching, you can use frames-to-video in Veo 3; I could turn an image I've got from Midjourney into a video. The trouble is, what I found is that when I just used a text prompt to video, the results were really realistic in movement, but when I used an image frame to video, it was just slightly less realistic. So I just worked on my prompt.
I'm using lots of words like "dark, gritty, intense, dramatic", and it gave me these really dark shots, which are really cool, especially in the prison, and it somewhat obscures faces throughout, which is fine, because you don't want people to notice that your character maybe doesn't look the same from one scene to the next. Like, here's him in prison right here: it's really dark, and it doesn't matter so much. I use this small demon that he carries around with him as a metaphor for mental health. See, it's not exactly the same man here, but it doesn't matter; and going from this one to the next, the tattoo slightly changes. It's not the same man, but it's got the same prompt, and it's very, very similar.
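That copy-paste prompt workflow (a fixed character description, a per-shot action, and the same style keywords every time) can be kept as a small template so nothing drifts between shots. A minimal sketch; the description and values are just the ones from this lecture, used illustratively:

```python
"""Sketch: a reusable text-to-video prompt template, so the recurring
character description stays identical across shots and only the changing
details (weight, scene action) are swapped in.
"""

# Fixed character description, copied between every shot prompt.
CHARACTER = ("a man aged 35, chubby, very overweight, {weight} pounds, "
             "mid-length straggly hair, face tattoos (one cross on his "
             "cheekbone, a rose on his forehead, black and white)")

# Style keywords that darken shots and help obscure face differences.
STYLE = "dark, gritty, intense, dramatic lighting"

def shot_prompt(action: str, weight: int = 300) -> str:
    """Compose one prompt: character + this shot's action + style keywords."""
    return f"{CHARACTER.format(weight=weight)}, {action}, {STYLE}"

# Later scenes, after the weight loss, just change the number:
# shot_prompt("walks out of the prison gates at dawn", weight=200)
```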
So, next lecture, if you want to watch it: it's a 28- or 29-minute video, I'll put it in there, and you can see what I've done now that I've broken it down. You can see what I do: the voiceover, then the music, then creating the scenes in two-minute sections, which I think helps for YouTube, because YouTube needs a hook and then just short scenes to keep people engaged, with a hook at the end of each one. And it tells his life story. If you don't know who Jelly Roll is, go and check him out: he's an artist, a hip-hop artist turned country, with a really interesting life. Like I say, he went to prison, redeemed himself, and is now one of the best-selling artists, and did it through struggle and grind, going out there and making it work. So he's got a really interesting life story. I'm a big fan of his and appreciative of all his work.
He really speaks to people a lot about mental-health struggles and things like that, which people can really relate to. So I'm thinking I'll make some more of these YouTube-style movies, if you like; this was the first one. I put it on here: you can go to AIVs dot com forward slash jelly roll movie, or just type in jelly roll movie dot com and it goes through to this page, or AIVs. And you can see I also put it on my YouTube channel, along with the ones I'm potentially going to make next: a Mr. Beast movie, an Elon movie; you can watch the trailer or go through to YouTube right here to see that.

So that was it, that was the mini movie. I just want to break it down now that you've been learning all this. Yes, if you want continuity, then use the way I show you, with things like Midjourney to Runway or another video tool; you can use Midjourney or any image-creation tool inside Veo 3 too, and frames-to-video inside Veo is probably as good as Runway.
It's just that text-to-video was more realistic for me. So it was an interesting test for myself: what if continuity didn't matter? Because that's the main thing people worry about, my character doesn't look the same from scene to scene; what if there's a project where continuity doesn't matter? And I think I found one. So go ahead: in the next lecture you can check out the whole video, and let me know what you think. It is unofficial, of course, I have to say, and I say so in all my descriptions and things. I legally checked how I can make a movie about someone I haven't got permission from: yes, as long as it's not defaming, as long as it's not negative, and as long as you don't pose it as official, which I'm not. Lots of people make mini docs about people's lives; this one's just different because, for one of the first times, I'm reconstructing it to be almost like a fictional movie, but it is like a documentary movie, I guess, a docu-movie, something like that. Okay. Go and check that out, and I'll see you again in another lecture really, really soon.
— Watch the Mini AI Movie Here —
Some people are born into privilege, some are born into pain.
I was born into a battle.
To understand how I got here, how this madness became my life, you need to go back, way back.
Before the crowds, before the jail cells, and way back.
Back to the trailer park.
I grew up on the edge of Nashville, but it wasn't the glitz and rhinestones they show you on TV.
It was Antioch: trailer parks, pawn shops, and prayers that never made it past the ceiling.
My mom was battling addiction, depression, like a lot of people where I'm from.
In fact, it was only music that got her out of her funk.
I'll get to that later.
My pops was hustling to keep the lights on in a wholesale meat business, and of course as a bookie on the side. And me?
I was learning how to survive with nothing but broken promises and bad examples.
Hey mom, I'm going out.
Do you need anything from the store?
Yeah, yeah, yeah. That was it.
That was what a conversation with my mom was like.
She said yeah sometimes, but with the drugs, I know she didn't even know what she was saying yeah to.
Her being this way left me a lot of room to find extracurricular activities, shall we say.
I was 13 years old when I got arrested for the first time.
The first of over 40.
It's easy to stop the story there.
A kid and a broken system.
One more lost soul spinning through the revolving door.
But if you really want to know how that kid became a Grammy-nominated artist, how I found music and how music found me, you got to understand what was going on at home.
When I was 10, just three years before that first arrest, I knew I wanted to make music.
I remember the exact moment.
My dad, Buddy, was deep in his drinking back then.
Funny enough, he became my best friend later in life, but that's a story for another time.
Back then, I was wild, rebellious.
We couldn't even have a conversation without it turning into a war.
My mom, Donna, she was battling her own demons: depression, addiction.
For almost 20 years, I don't think I ever saw her out of a nightgown.
She lived in a room, curtains closed, world shut out.
Every once in a blue moon, when the clouds would break, she'd float into the kitchen.
All her girlfriends would come over, and suddenly that kitchen became a concert hall.
She'd play music, but not just play it; she'd tell stories about the songs, build them up like they were legends.
Let me tell y'all about this Dolly song.
The song was originally written for Diana Ross, but of course, as we know, Dolly…
By the time she hit play, we'd be sitting there with goosebumps.
And when the song ended, we'd clap like we were front row at the Opry.
It was in those moments I saw music was more than sound.
It was medicine.
It lifted her out of the darkest places, lit up the whole house.
I was 10 years old sitting there in that kitchen, and I knew right then I wanted to make music.
I wanted to save people like that.
But life had other plans first.
My parents divorced when I was 13, and it's no coincidence that's when everything went downhill.
I thought I had to take care of my mom.
We needed money, and I was going to get it any way I could.
The streets of Antioch were waiting, and oh boy, did they swallow me up.
Jason DeFord, for the crimes of drug possession, drug dealing, and shoplifting, it brings me no pleasure to pass a sentence of three years in the Davidson County Department of Corrections.
May God help you, son.
There was good in my life, but it was drowned in all the bad.
I hate saying I was just a product of my environment.
It feels like an excuse, like it strips away free will.
But the truth is, at this stage, I was just a kid about to get arrested.
And within two years, still a kid, I committed the most heinous thing I've ever done, something I still can't make peace with.
That little demon still visits me: furry, ugly, always there.
Some mornings, he's the first thing I see, and I've got to talk him down just to breathe.
So I'm in prison.
I was 13 the first time they cuffed me, just a kid.
But already deep enough in the streets that the law knew my name.
Wish I could tell you prison scared me straight, that I learned my lesson.
But that'd be a lie.
The truth is, once you get in the system, it's like a revolving door.
You get out and there's nothing waiting for you but the same corners, the same people, the same bad choices.
And pretty soon, you're back inside.
In the words of George Jung (I quote the movie Blow a lot; I've said it on podcasts now):
I went in with a bachelor of marijuana and came out with a doctorate of cocaine.
77
Honestly, and this isn't said as an excuse, I couldn't have reformed even if I wanted to. Nobody teaches you how to get out. They just teach you how to survive inside. It wasn't until I was 25, in adult prison, that I even got my GED. But before that happened (and this haunts me), I caught my worst charge at age 15. Still a kid, but that's no excuse. I still can't shake it from my soul.

"Where is it? Tell me, where is it? Tell me now! Where is it? Tell me now!"
I used to think about that a lot. What I'd say to that man if I saw him today. How I'd tell him I wasn't just stealing money, I was stealing his peace, his safety. It's not about me, of course, but it's like a demon. That little, small, hairy demon that I carry around. That's how I imagine him. He's always there with me, still to this day. I was ignoring him back then, but oh man, I see him a lot today. A mental health demon. The fact that that happened probably means I'll never fully get rid of him.

But even after that terrible night and getting locked up again, I still didn't change. I was clean of drugs on occasion, but not clean of the evil I carried inside me. Wherever it came from and however it started, I stayed bad, down to the core, for a long time. The only thing that ever made me change was still years away, and it was all because of a woman, my number one woman to this day.

"Jason, you've got a letter." Oh man.
When the big label said no, I put it on YouTube. And you, the people, said yes. That's how I got here. If you're digging this video, just know it ain't made by a big company or with sponsors telling us what to say. It's one guy, a laptop, and months of research, created with AI and a whole lot of heart. All we ask is hit that subscribe button so YouTube knows you like it and so you don't miss the next one. It's going to be a good one. It's free, and it keeps us independent. All right, back to my story.
In and out, over and over. Prison became my life. I met someone on the outside. I don't know if we were good for each other or just drinking from the same poison, which kept us on the same path. Her name was Felicia. Prison became my second address, and music, music was my anchor. Because in there, time is all you got. I wrote a song a day, every day. Even when I didn't feel like it. Even when the words hurt to write.

"Boxed in, dreaming of the streets I used to walk. Now it's just walls and talk. Lost my freedom, found this beat. This is how we survived the concrete. Trapped in these walls, dreaming of sunshine. Man, remember those days. Now it's just counting time."

"Oh, wait. Jelly, you got some bars, son. You could do something with that on the outside. You know, if you dream big enough, you delusional dreamer."
Delusional dreamer. I like that label. Something I'll come back to later.

I wasn't free. Not on the outside, not in my head. And I thought I'd die that way. I would be, like so many others, stuck in this system for the rest of my life. Or until I was killed, or dead of a drug overdose. Until one day, a letter showed up. That letter changed everything.

Prison time. It's a slow kind of death. Same walls, same faces, same voices echoing in the dark. You start to wonder if the world outside even remembers you. Then one morning, this letter came. The paper was worn, like it had been handled a hundred times before it reached me. The words, man, they weren't fancy, but they were real. It said...

Now I had a reason. A real reason. Not me in a selfish dream. It was all for her.
Something flipped in me. I'd missed her first breath, but I wasn't going to miss anything else. I had to get out, stay out, and succeed. And the only thing I knew how to do, my only apprenticeship except drug dealing and robbing people, was music. No one was going to hire me or be convinced that I was going to make it. No one was going to hire a convicted violent felon. I had to make music work, for her.

I picked up my pen that day and didn't stop. I wrote verses on scraps of toilet paper, on the back of food wrap, and whatever I could find. The other guys started asking me to spit bars. We'd have little shows in the day room. Just a beat from somebody's knuckles on the table and me trying to turn pain into poetry.

"Trapped in this concrete maze, just trying to find my way. Somebody save me from these darker days. I'm just a long-haired son of a sinner, born to lose, but I'm still here a winner."

I thought I'd made it right there, inside. Crazy, huh? Someone said my bars hit, and I was smiling all night thinking, that's it. And in a way it was. Tiny wins, tiny, leading me to my goal. Of course I had a long way to go. But that's the thing. You've got to be a little delusional to dream your way out of a cage. Now I just had to get out, resist the temptations of the outside, and stay a dreamer. Stay delusional.
Now this is where the meat of the story is, y'all. Really, it's my act two. I had freedom, and now I had to put my plan into action. When I walked out, the world felt too big. I didn't have money. I didn't have a car that worked half the time. But I had my rhymes, and I had this beat-up laptop that barely turned on. Most of all, I had my determination. A crazy will and need to succeed, to turn all this mess around. I still had my demons, of course. I was far from fixed. That little hairy guy made sure he was right there with me every day. Poor mental health, like so many of us, but I had no choice. I had to keep going.

I'd record in my bedroom, blankets on the walls to kill the echo, and I'd burn CDs one by one. I'd sell them out of my trunk. I'd stand outside gas stations or in the parking lot of a Walmart, just trying to get somebody to listen. Heck, I even ended up performing at those gas stations, anywhere anyone would listen.

"I'm ready to break. Studying about that all the way. Don't think I'm proportionate."
Hustle pays off. I don't care who you are or what industry or what your dream is. Let this be the one lesson you take away from my story: hard work beats talent. I fully believe that. I'm a fat guy with a record and face tattoos. I had obstacles in my way, but I had a work ethic like no one else, and no one, no one, could tell me I wasn't gonna make it.

My first paid gig? 200 bucks. I thought I was rich. Someone was paying me to do this. Slowly, bit by bit, month by month, I grew. I was doing 300 small shows a year at one point, living on greasy diner food, Waffle House, and sleeping in the van. And most of that money went straight into gas tanks and broken-down amps. Still, the stage and the thought of my daughter, that's what really kept me alive.

Looking back on it now, this next story I'm gonna tell y'all makes me laugh so much. Like, at this point, I thought I was playing the Grand Ole Opry, which I do later. But right now, ha ha, listen to this.
I've been a professional, full-time, card-carrying delusional dreamer. I thought I'd made it years before I actually did. Every tiny win felt like a giant leap toward the big time, at least in my head. Looking back, I'm sure folks thought I was crazy, or just plain stupid. But that blind faith, that's what kept me going.

I'll never forget this one festival I played in the early days. Five stages. I was on the last one, fifth of five, way in the back, playing at noon while people were still parking their cars. I don't know, maybe 40 people stood there watching. But in my head, I was playing the CMAs, which, again, I do later, which is crazy. I didn't care if there were 4,000 or just four. If you showed up, I gave you everything.

Thing is, the crowd wasn't the only thing that was lacking. Every time I thought I was getting somewhere, the people in power, the ones who could actually open the door, slammed it shut. They all seemed to say the same thing. But I'm not the only one who's been told no.
Think about some of the greats throughout time. Michael Jordan got cut from his high school team before becoming a legend. Oprah Winfrey was fired as unfit for television. Walt Disney was let go for lacking imagination. Elvis Presley was told to go back to driving a truck. Rejection doesn't mean the end. Sometimes it's just the start. Unless I destroyed this whole thing myself, which I almost did.
"Look, I've listened to your stuff, and you're just not what we're looking for right now."

You mean too fat? Face tattoos? My past? Do you know what it's like to face this? Rejection every single day because of who you are or your past? I'd knock on every industry door I could. Most of them never even opened, and when they did, it was the same lines. Too fat, wrong voice, wrong look, face tattoos, forget it. They couldn't see past the cover to read the story. Lucky for me, well, for future me and my family, I was still a delusional dreamer. These were just mountains to climb and hurdles to overcome, nothing I hadn't overcome before. The trouble was, at the same time, I was setting up my own hurdles. I'm not ashamed to tell you this. Well, actually, that's not true. I'm ashamed, but I'll be real with y'all. I was slipping.
There's only so much rejection you can take, and it was the only life I knew. Cocaine, codeine, it honestly felt like the only way to quiet my head, but the truth was it was killing me slowly. But things were slowly aligning into place. I could feel it. You know when you can just feel that things are so close to going your way? Any of y'all seen that meme of the two guys digging for gold, where one gives up so close and the other is behind but still digging? I didn't want to be the guy who gave up. I attended AA.

"Hi, my name's Jason. I'm a musician and I'm, well, I'm an addict. I have been most of my life, but, you know, I'm, like all of us, I'm trying."

I'm not sober still to this day. I drink and smoke, and I for sure still have my demons. That furry little guy is still with me, more often than I'd care for him to be.
Through all the bad, there was hope. I had to get sober. Not only did my band depend on me, but there were three people about to come into my life and change the course of history. A woman, a man, and an eight-year-old girl. Everyone has their angel looking over them. Call it God, call it fate, call it whatever you want to, but these were mine. I'm lucky enough to have a few. I have no idea why I have so many angels in my life. Perhaps there's a bigger plan for me. Sharing my message and giving people hope is my greater purpose, perhaps, and these angels helped that become a reality. Without them, none of y'all would know the name Jelly Roll, and you wouldn't be watching this movie right now.
I had a friend, no, a brother: Struggle Jennings. We were cut from the same cloth, same scars, same hunger. He gave me a platform when nobody else would, particularly by collaborating on the Waylon and Willie series. We recorded together, hit the road, and brought our music straight to the people. He'll say how I helped him, too, how I was a support to him, too, but I don't think he'll ever know how much he helped me, an anchor in this madness, through similar experiences and, of course, music.

"Listen, Jelly, forget the labels, man. Forget it. You can do it all yourself. Take it to the people online, YouTube, and the world can't ignore you."

"You're right. This is the time, man. Put it out there, get seen, and the world can't ignore us, man."

YouTube became our label. We didn't need an industry gatekeeper. When the record labels won't listen, the people still can. Struggle really drove that home for me. It would be YouTube and the people, you guys watching right now, that would change my history forever. You'll see soon. But it wasn't just Struggle.
I met my now-wife, Bunny. She took a chance on me when I was beyond beautifully broken. Remember my daughter's mother I spoke about earlier? Well, she was having her own struggles with addiction. I don't need to go into details here, but I was an ex-convict, a drug addict without even a permanent home, who desperately needed to get custody of my daughter. I don't know why this angel helped me, but when I needed somebody to save me, she was there. Everything was coming together. I just needed this break, this next push, to make it. And it didn't come from a big label or a producer. Like Struggle said, it came straight from you, from the people. Everything was about to change overnight.
I was just coming out of the worst of my addiction. I was still playing dive bars, still grinding for scraps, some bigger venues, but nothing to really write home about, if I'm honest. Then one night I woke up in the middle of a dream and there was this melody in my head. This wasn't uncommon. I wrote so much, and lived and breathed music, that I dreamt songs and melodies all the time. But this was different. I recorded this song in two hours, and most of that was a discussion about whether the Somebody Save Me lyrics were the opening verse. Funnily enough, that's also something Eminem said he struggled with years later when he sampled the track. That still sounds like a crazy sentence to say to me. I'm just Jason from Antioch, and Eminem is recording my song.

"You ready?"

"I'm not sure. You think they'll like this?"

"I don't know. It's different, but we gotta try it."

"Yeah, I need to get this out."
This song didn't sound like anything I'd done before. It sounded honest. It sounded like me. Many artists have that breakout song or movie that they didn't know was gonna be the one to make it happen. Garth Brooks had it with Friends in Low Places in 1990. He thought it was just a fun bar song. It became one of his signature tracks. Actually, later I'd open a bar in Nashville just down from his. What am I saying? It still shocks me to this day to say these things. Lil Nas X had it with Old Town Road. That song was made for fun on a beat bought online. It became a record-breaking Billboard number one. Sylvester Stallone wrote the script for Rocky in three days, expecting it to be a small film. It won Best Picture. Someone I'd also meet, acting on his TV show years later. I still can't believe this is my life. This, unknown to me, was my unexpected moment. It was about to happen. No one would've believed it.
Let me set the scene for you, and tell me how you'd feel. I woke up one morning as usual. Grab your phone and check it, standard morning practice. I had a weird number of messages, but sometimes that happened. I thought nothing of it. I didn't even check them, but I could see that some messages contained the word YouTube in the preview of the notification. So without reading them, I went straight to YouTube. That song, the one I wasn't sure about, that was totally different, Save Me, was blowing up. That feeling. It's almost better than being told by a label your album is platinum or whatever. I did this myself. I put it out there, and you, the people, were responding.
But this song, it was so different from my usual stuff. I'd been a rapper my whole life, but I grew up in Nashville, the home of country music. Mama raised me on those old country records, and you can hear that in my voice, even if the beat was different. The people responded. The song resonated with so many. When I was speaking my truth, the people felt it as a release. It was medicine. We'd come full circle. All those tracks that were medicine for Mama, and now I was the medicine for the people. That dream was coming true.

The numbers didn't lie. The views were going up like crazy. People sharing, commenting. The music world couldn't ignore this. When the people speak, the labels take notice. Suddenly, every label in America was calling. The same ones who said I was too fat, that I had no chance, that I wasn't the right fit. They were calling. And this time, I didn't have to beg for a meeting. The work had actually paid off. See, I told you: hard work beats talent. Be real to yourself. This was it. It's like a weight off my shoulders. I could finally pay the band what they deserved. I was clean from the hard stuff. My daughter was safe, and for the first time in a long time, I could breathe. What happened next, the trajectory from this one thing... and see, you never know where or when the next thing is going to happen. You just have to keep going. What happened next was truly incredible.
What happened next, man, it blew right past anything I ever dared to dream. Within a year, then two, then three, I'd gone from a guy hustling just to keep the lights on to standing under those bright lights with Grammy nominations. I was hosting late-night shows, playing stadiums so big you couldn't even see the last row. Thousands of people all singing every word right back to me. Awards, accolades, moments where I stood there on stage thinking, finally. But here's the thing. It wasn't some selfish climb that got me there. It wasn't chasing fame for fame's sake. It was sharing my pain in a song, singing about feeling helpless, about mental health, about needing help, being medicine for the people, just letting folks know you're not alone. Other people are hurting, too. I'm hurting, too. That honesty, that connection, that's what made it happen. That's why we're here. I truly believe this win isn't just mine, it's ours. I might be the vessel, but I'm feeling what you feel, and we're in this together, and it was beautiful. Any of y'all beautifully broken in Nashville?

I'm not going to lie to you. These past few years have been the best of my life. I've gone from sleeping in a van and in jail cells to performing in every corner of the country, and my God, it feels incredible.
But here's the thing. God has a way of balancing you, of keeping you humble, and no matter how many stages I played, no matter how much success came my way, there was still a problem I couldn't shake. It's a problem no trophy, no sold-out crowd, no headline could cure. In fact, it's the very thing that got me here in the first place. The same struggle I still share with so many of you.

Up until now, you've seen how hard work, grit, and being a delusional dreamer can take a man from rock bottom to living his dream. If I can do it, so can you. Just look at where I came from. But remember this. No man is an island, and for all the battles I've fought and won, there was still one war I was losing. So let's talk about that, because if you think chasing a dream is hard, what I'm about to tell you is even harder. I'm not an addict anymore, but I'd be lying if I told you I don't still wrestle with my own mind.
For years, I ate my feelings. I'd walk into a dressing room and my first move wasn't to check the set list. It was to hunt for the candy bowl. People think food is harmless, but it's an addiction like anything else, and Lord knows I've collected my share of addictions in this life. Now, food feels like the last hurdle standing in my way. It's something I'm working on every day. If you follow my YouTube channel, you've seen the changes I've already made, the transformation I'm chasing.

There's one moment I'll never forget. We were out on the road and found out we had access to a basketball court. I was over 300 pounds and struggling to move, to breathe, to keep up, but I played anyway. The next day, I came back to try again, and my whole band and crew were already there, waiting to play. They'd heard about me trying, about me wanting to get better, and they made it a thing. Now, every week on the road, we have our own little crew tournament. That's not just music. That's family. That's love.
I told you I've got angels with me, and I keep finding more. Maybe if I can beat this last addiction, God will give me another mountain to climb. He works like that, always keeping you working, and that's okay. Because if I'm living proof that you can overcome anything, and if me getting through my struggles helps even one other person get through theirs, then it's worth every fight.

And as for what's next, well, that's the exciting part. I've spent my life climbing one mountain after another. Some I chose, some I didn't. And every time I made it to the top, I'd look out and see another peak waiting for me. Now, I wake up in my own bed. When I'm home, I take my kids to school, come home to my wife, and still get to play music for the people who built me. I get to host shows, meet my heroes, and be part of something bigger than myself. But I'm not done. There's still more music to make, more stories to tell, more people to reach, starting with you. Because if I can go from a jail cell to the stage, you can go from where you are to where you're meant to be. And trust me, the best part of my story hasn't even been written yet.

[Music]
— The MrBeast Inspired “Movie” made with AI —
They say that I just woke up one day with a million subscribers, that I just gave money away because I had it.

"Okay, Jimmy, are you ready?"

But where I'm from, no one even believed being a YouTuber was possible. And after a decade of obsessing, I went from a boy in his bedroom filming with a phone to the most watched person on the planet, giving away everything. To understand why I gave away everything, you have to go back, way back, to when I had nothing.

"Little By Little. I used to walk the road poor of five. I'd get drunk and shoot out the fire."

School made me feel broken. Like everyone else got the manual and I didn't. I used to just look out of the window and dream. I'm a dreamer. An obsessive dreamer. I'd imagine life as an anime. I loved Naruto. But it's me, I'm Naruto. But I play baseball. Baseball was my passion, my obsession. Little did I know at the time, but my obsession was about to be taken from me, and I'd need a new obsession to dream about. One that would change the world forever.
"Jimmy, pay attention. Always daydreaming. You're a dreamer."

Yeah. I am a dreamer. Adults were always telling me that, and I never understood why that was a bad thing.

"All right, team, let's get focused out there."

I was painfully shy, and baseball was all I had. But I had Crohn's before I had answers, before anyone believed in me. Crohn's takes over your life. You can't eat. You're tired. You can't do anything. Wait, let's hear it from the man himself. Take it away, Mark.

"Meet Jimmy. He looks like any regular young whippersnapper, but inside, his tummy's in real trouble. Crohn's disease is a mysterious malady of the intestines, making digestion a downright doozy. Cramps, fatigue, and frequent bathroom breaks, oh my. Doctors say symptoms include abdominal pain and cramping, severe fatigue, weight loss, and fever, making life and the simplest of tasks difficult. It's a wonder how young Jimmy got anything done at all. And the beast is yet to come. I mean, the best is yet to come."
So now it was time for a new obsession. That's just who I am. I'm either totally obsessed with something or I don't care at all. And my next obsession would result in changing the media landscape of the world forever. The world just didn't know it yet. But I did.

My mom was serving in the military. That meant a lot of late nights, and a lot of the time it was just me and the internet. Mom! It's not like we were poor or anything. We had food on the table.

"Hey, CJ, where's mom?" "Working."

My mom just worked a lot. We were just your typical small-town American family. Actually, the alone time paired with Crohn's was the perfect storm. It was probably the perfect setting for my obsession to take hold, and it made me who I am today.
With YouTube, I was instantly hooked. It wasn't just the videos, it was the psychology behind them that fascinated me. Why did this video go viral and that one flop? Why did this thumbnail get clicked but that one didn't? What made someone stop scrolling, and what made them stay watching? I was obsessed. I spent hours every single day analyzing videos. Not just watching, dissecting. I found this tiny group online, we still talk to this day, and one time I literally woke up, jumped on a 16-hour Skype call just breaking down YouTube videos, then passed out again. If it takes 10,000 hours to master something, I was ready to give it 10,000 days. I lived it, breathed it, dreamed it. For over a decade. YouTube wasn't just something I did, it became who I was. It got to the point where at school, someone would ask me:

"So, Jimmy, what's your plans for next year after high school? College or a job?"

"YouTube. I'm... I'm going to be a YouTuber."

"What? YouTube? Jimmy? That's not a job, that's a website for cat videos."

You have to remember, back then, being a YouTuber wasn't even a thing. I still remember the first time I realized you could make money from this. I thought, wait, what? That's the coolest thing ever. But once I knew it could be a job, that was it. I was either going to become the greatest YouTuber of all time, or be 80 years old with 1,000 subscribers, still trying.
At the time, my channel sucked. I had maybe a couple hundred subs. I posted some cringy gaming videos, joined sub-for-sub Facebook groups. I wasn't taking it seriously yet. The name MrBeast? Total accident. It came from some random auto-assigned gaming tag, and just like that, MrBeast was born. But things were getting desperate. Time was running out, school was ending soon, and then what? My mom would expect me to get a job or go to college. I knew I had one shot. I had to go viral, and I had to do it fast.
I don't think there's ever just one reason we do anything in life, but if I'm being real, the reason I started YouTube goes back to my mom, my dad, and losing everything. I always said I wanted to retire my mom early. I don't really talk about my dad much, but let's just say things weren't great between them. Then 2008 happened.

"I think this is the most significant financial crisis in the post-war period."

"Soaring gas prices, falling home prices, and rising unemployment."

In 2008, the crash happened. The economy tanked. And just like that, I saw my family lose everything. I remember it so clearly. I was maybe 9, 10 years old, watching our whole lives change, my mom literally bankrupt. I remember her crying a lot, as strong as she was. She had two boys to take care of. My mom had to pick up a second job, and my dad wasn't in the picture anymore. She did it all alone.

And then one day, in one of my obsessive binge sessions analyzing YouTube videos, I saw a post of some YouTubers showing their earnings. Someone had made $100,000 a year from YouTube. From YouTube. I couldn't believe it. That was more than my mom ever made. Right then and there, I knew: if I can crack this, I can retire her. She'd never have to work again. That's it. That's how I do it.
Maybe it was seeing us lose everything. Maybe it was being sick all the time. Or proving people wrong. The kids who said I was wasting my time. The teachers. Even my mom, worrying I was obsessed. I was obsessed. I tried to fit in. I tried to watch South Park like all the kids in my school. But I just couldn't. I felt like I was just wasting time I could spend learning more about YouTube. Everything was a distraction from my goal. I became almost mute with obsession. That's how bad it got. YouTube was all I could talk about. No one else got it. So I just didn't talk. I couldn't switch it off. So let's use this obsession for good.

From that point on, I was done with Plan B. I was 18, almost done with high school. My channel was struggling. But I had one mission. Make a video that goes viral. Use the money to make another. And repeat that until I became the biggest YouTuber in the world. Okay, teenage me. Let's get a viral idea.
I'm only here, and you're watching this now, through sheer determination, hard work, and passion for YouTube, devoting myself to this. If you're enjoying my story in this movie, just know it's not made by any company or big team. It's one guy, a laptop, months of research, AI tools, and a whole lot of heart. All we ask is that you subscribe to help us keep creating stories. It's free, takes one second, and helps keep us independent. Thanks. Okay, back to the video.

And here it is. The ultimatum from mom that could have ended my YouTube dreams.
"Listen, I want to talk to you about something. Jimmy, take the headphones off."

"Is it about the dishes? I'll do those in a bit."

"It's not the dishes. It's this. I know you love it. I know you've been working hard, but you're barely passing school right now."

"I'm learning more doing this than any class. Algorithms, thumbnails, retention."

"But you're not learning how to show up, how to finish what you start. Life has structure, Jimmy. You need something solid beneath your feet. I need you to promise me something. One thing. For next summer, you go to college. Community college. We don't have money for anything else. You get your grades up, you apply, and you go. You can do YouTube, sure. On the side. But you need to focus on school right now."

"College isn't going to teach me how to do this. No class is going to tell me how to go viral. This is my passion."

"Maybe not. But it'll teach you how to stick with something. How to handle life when it's not going your way. If you don't go to college, you need to get a job. Full time. And move out."

"Okay."

"What? What was that? Did you say okay?"

"Yeah. I promise. I'll go to college."

"Good. That means getting your grades up now. Starting tonight."
I’m not going to say that’s the first time I’ve ever lied to my mom.
175
But it certainly wasn’t the last time about my YouTube channel. I lied.
176
I wasn’t going to go to college.
177
I had eight months, maybe nine. That was it.
178
I had been studying YouTube like my life depended on it.
179
I had to make it work.
180
This was my shot. My deadline. No backup plan.
181
To force myself to be dedicated to this, I recorded and uploaded videos to my future self.
182
Scheduled to be released six months, one year, five years, and ten years from now about my success.
183
Okay. Hey, future me.
184
At the time I’m recording this video, I have 8,000 subscribers and 1.8 million views.
185
So you might be wondering, what the heck is this?
186
I just got a random spark of motivation to make a video and schedule it to upload six months later.
187
Hopefully, in six months, I don’t still have 8K subs. I’m going to schedule this video to upload six
188
months down the road.
189
And then after this, I’m going to schedule upload a video for a year and then five years
190
and then a decade.
191
So do you want to know what I did and how big I became over the next six months?
192
Let me show you.
193
So far, I’d uploaded whatever I was into at the time, breakdowns of YouTubers, how much
194
they made, sometimes a skit, sometimes something completely random.
195
Random was the key word.
196
Every time someone found out I had a YouTube channel, they’d ask,
197
Oh, you have a YouTube channel. Cool.
198
What kind of videos do you make?
199
Oh, just kind of random stuff, really.
200
No particular type of stuff, really.
201
And it was true. I had no niche.
202
I was just throwing spaghetti at the wall.
203
On March 1st, 2016, I had around 20,000 subscribers after years of work.
204
And I released this video trying to explain what the heck I was even doing.
205
And when people ask me what type of videos I make, I always respond with, you know, my
206
videos are just super random.
207
It’s really how I feel that day.
208
And then they respond with, bro, just tell me what type of videos do you make?
209
And then I’ll be like, it’s just random.
210
My channel is not known for anything.
211
I’m just really random.
212
One day I make a worst intros video.
213
The next day I blow up my laptop.
214
The next day I counted to 10,000.
215
But from everything I’d studied, I knew that wasn’t good.
216
YouTube doesn’t like confusion.
217
Viewers don’t like confusion.
218
If someone clicked on a video, liked it, and the next one was about something totally different, they’d move on.
219
I needed consistency, a direction.
220
But direction means decisions.
221
And I had no idea what kind of creator I was supposed to be.
222
Just before this, on February 1st, 2016, I uploaded a video of me counting to 10,000.
223
Three straight hours.
224
And I started to notice something in the analytics of this video.
225
What is up, guys?
226
I tweeted out a tweet saying, if this gets 50 retweets, I’ll count to 10,000. 4,490, 4,491,
227
4,492… 9,999, 10,000.
228
Oh, that’s a lot.
229
It did way better than everything else. Why?
230
Was it the length? The pain? The absurdity?
231
The desire to see if I’d actually do it?
232
Probably all of the above.
233
That’s when I realized something that changed everything.
234
No one wants to see ordinary stuff.
235
I have an analogy for this that I still stick to to this day.
236
If you’re driving down the street and see a cow in a field, you forget it.
237
You’ll probably never think about it again.
238
But if you drive down the street and you see a purple cow in a field, you stop.
239
You tell someone. You remember. That was it.
240
My videos had to be a purple cow every single time.
241
Something that makes you stop and take notice and want to see. Hey. Hey.
242
What are you doing? Nothing.
243
Just eating a Tootsie Pop.
244
I wonder how many licks it would take to get to the middle. That was it.
245
A question people actually wondered about. Funny. Ridiculous. Mildly painful. Totally human.
246
And most importantly, a purple cow.
247
That didn’t take long at all.
248
I’m actually there.
249
Now you know the answer.
250
Spread it with the rest of the world.
251
If you ever hear anyone ask, tell them to subscribe to me and then tell them that it’s 270 licks.
252
Something like that.
253
What am I saying?
254
Again, it worked.
255
The analytics showed insane retention.
256
People stayed to watch me suffer.
257
YouTube took notice and started pushing it to more people.
258
They want to promote videos that keep people watching and staying on YouTube, of course.
259
Make the best videos possible and you’ll be rewarded.
260
It was May 2016.
261
I had a few months until college and now, momentum.
262
I needed another idea.
263
Another purple cow. Fast.
264
I don’t know why I had a plastic knife in my pocket from lunch or why I started cutting that table. I was bored.
265
I was thinking about the next video.
266
Then I noticed it was slicing through.
267
What the heck are you doing? Wait.
268
Could I cut through the whole thing?
269
I did the math quickly.
270
It’d take hours.
271
It might just work.
272
Is this another purple cow?
273
After school, I went straight to the store.
274
I grabbed a pack of plastic knives and the cheapest plastic folding table I could find and
275
headed back to film.
276
And I just started rolling.
277
This is a table. Cut right here.
278
Cut all the way through.
279
I can’t believe I did that.
280
What happened next,
281
I couldn’t believe.
282
I uploaded it on June 5th.
283
By June 12th, I’d hit 50,000 subscribers.
284
A week later, July 9th, 100,000.
285
It had taken 460 videos since I started.
286
Years of obsession.
287
And now, I’d never felt more alive. It was working. Finally. Whoa.
288
That has 5 million views?
289
Saran wrap has 3?
290
Oh god, I know what I’m doing. I kept going.
291
In August, I released a video comparing 100 layers of saran wrap to toilet paper.
292
Of course I did.
293
I was making a little money now from ads, a few bucks here and there.
294
But I put every cent back into the channel. New laptop. Better lights. Camera. Props.
295
Whatever I needed to make the next one better.
296
But it wasn’t enough. Not yet.
297
September came fast.
298
College was now.
299
Mom’s deadline had arrived.
300
College, or a job and move out.
301
But I wasn’t done yet.
302
I had one more move to play.
303
This can’t be the end of my channel.
304
I was so close to my dream.
305
100,000 subscribers.
306
I’d hit 6 figures. It felt huge.
307
But not huge enough.
308
The views were good.
309
The growth was good.
310
But I couldn’t live off of it. Not yet.
311
It wasn’t “don’t go to college” good.
312
It wasn’t “move out and survive” good.
313
And then August rolled around.
314
College enrollment time. Mom? Loving it. Me?
315
Hating every second.
316
You’re gonna love it, Jimmy.
317
This is the beginning of your real life. You’ll see.
318
You’ll thank me one day. Yeah, maybe.
319
I wanted to believe her.
320
Maybe I was being dramatic.
321
Maybe college would be fine.
322
Maybe I was just scared of growing up.
323
Maybe this was the path I was supposed to take.
324
Today we will be examining the socioeconomic impact of the Industrial Revolution on urban development. Yeah.
325
That lasted about two weeks.
326
I knew college wasn’t for me.
327
This wasn’t learning.
328
This was reading. From a book. I had books.
329
I could have stayed home and read the same thing.
330
So I went back to YouTube.
331
Harder than ever.
332
But this time, it was life or death.
333
See, I had a deadline.
334
In six months, my mom was gonna find out I was failing everything. Straight zeros.
335
And when she did, I’d be kicked out. Gone.
336
So I had six months.
337
Six months to make YouTube work.
338
Six months to make enough to move out.
339
Because when that door slammed shut, it had to not matter. Bye, Mom!
340
I’m off to college!
341
I started lying to my mom.
342
Not proud of it, but she didn’t understand.
343
I’d leave home like I was going to college.
344
Then sit in the parking lot editing for hours. Film at night.
345
Edit in the day. Sleep?
346
Nah, that was optional.
347
I went deep into the weirdest challenges and questions.
348
Can you microwave a microwave?
349
Yes, that was a purple cow idea. I microwaved.
350
A lot, actually.
351
I jumped on the fidget spinner trend.
352
Weird Amazon products. Counting. Again.
353
I was creating nonstop.
354
Remember earlier I said I had created videos to my future self?
355
One at six months?
356
Well, I had one at a year, too, that was released right in my first college semester.
357
In it, I hoped for 50,000 subs.
358
I hope you have at least a year.
359
What about 50k subs?
360
Ah, that seems kind of high. I hit over 100k. 50k subs.
361
If you don’t, you’re a freaking failure in life. Holy crap. I beat it. I beat me.
362
This just added fuel to my fire.
363
Then, in January 2017, I counted to 100,000.
364
It took over 24 hours. It was torture.
365
I was a zombie after, and I remember not being able to concentrate at all.
366
But it was a sacrifice I was willing to make for the channel, and to see if it was successful.
367
And right around when that six month deadline hit, I made $20,000 from YouTube.
368
It was at that moment I knew I’d have to sit my mom down and explain.
369
Mom, I have something to tell you. Okay.
370
Now this sounds serious.
371
I haven’t been going to college.
372
And before you say anything I know, I know what this means.
373
You haven’t been going to college.
374
What the heck have you been doing for six months, Jimmy?
375
I’ve been working on my YouTube channel, and I have a little money now.
376
I’m going to move out and try this. I just have to.
377
I found a place to live.
378
Rent was only $750, and I split it with a friend.
379
I still had my old Durango to get around, and I only needed a few hundred bucks a month to survive.
380
Everything else, it went back into the channel. This was it.
381
This was my life now.
382
I wasn’t just making videos, I was making the future I’d dreamed of and obsessed about for so long.
383
All I cared about was making the best videos I could. Wait.
384
The best videos ever.
385
And just five months later, in June 2017, everything changed. One video. One idea.
386
I stepped out of the challenge genre, and into something else entirely.
387
Something I don’t think anyone had done before.
388
Something that would help people, and at the same time send my channel into the stratosphere.
389
After these years of grinding, I finally got my first real brand deal.
390
They offered me $5,000.
391
More money than I’d ever seen in my life. It felt huge.
392
I mean, it was, but I didn’t just hear $5,000.
393
I heard possibility.
394
I knew if I flipped it the right way, it could be so much bigger, starting with that number.
395
$10,000 had to be in the title, and the idea was simple, but I knew it would hit with my audience.
396
I literally walked around the block for an hour, wearing them down, explaining why it made sense.
397
10 just looked better.
398
It was round, bold, the kind of number that grabs you.
399
Look, if you make it $10,000, I’ll just go outside and give it to a homeless man.
400
I swear, it’ll go viral.
401
And finally, they said yes.
402
It wasn’t polished.
403
I hadn’t thought it through.
404
It was just me, giving cash to someone. But it worked.
405
It was raw, real, and it was the start of everything.
406
For years, my mom and I butted heads about money.
407
She always said, can’t you give $8,000 and keep $2,000?
408
And I get her point, but this was all I wanted to do and spend on.
409
When I made more, I spent more on making the best videos ever.
410
Eventually, she trusted me enough to know what I was doing, and we have the best relationship now.
411
This is when the real growth came.
412
After that first $10,000 giveaway, everything shifted.
413
I’d done videos about creator earnings before, like, how much do YouTubers really make?
414
But now, I was giving away everything I made, using money as a topic for good.
415
Every penny from YouTube. Every dollar.
416
I either gave it away or put it straight back into making the next video, reinvesting 100%.
417
I believed that was what was going to set me apart. July 2017.
418
That was the first one, the homeless man.
419
October, December, more of the same.
420
I didn’t plan a schedule all that far in advance; it was spontaneous.
421
Every time I saw an opportunity, I’d go.
422
In March 2018, giving away $100,000 to a shelter felt surreal.
423
Those videos, they changed the channel.
424
They changed me.
425
It wasn’t just about money, it was about impact.
426
I came to believe in those purple cow ideas.
427
Something so different, so unexpected, you can’t ignore it.
428
And giving money away, large sums,
429
to strangers, that was my purple cow.
430
I remember staring at the analytics after tipping pizza delivery guys $10,000 in October 2017.
431
Seeing the watch time, seeing people sharing it.
432
Then I’d think, okay, maybe being generous is going to grow this channel in a real way.
433
That phase, giving away money, that’s probably when I fell in love with not just entertaining,
434
but doing good on the platform.
435
Usually, drama sells. It has filled newspaper headlines since newspapers began.
436
And YouTube was no different.
437
But I was sure doing good could get clicks too.
438
I was taking risks, and people noticed.
439
And the channel grew.
440
Because good stories are powerful.
441
Because people want to believe.
442
And I wanted to give them something to believe in.
443
Things were growing faster than I’d planned.
444
I needed a team, and people.
444
Some of them are still my core partners today.
446
As the videos got bigger, I realized I couldn’t do it alone anymore.
447
I mean, my friends had been in the videos since day one, but I really needed a team.
448
The scale was growing too fast.
449
And honestly, I had no clue what I was doing with money, apart from to spend it all.
450
So I did the only thing that made sense.
451
I brought in more of my friends.
452
One of my first hires outside of friends in the videos was a guy from school.
453
He wasn’t an accountant, but he was good with numbers.
454
So I asked him to handle my books.
455
Later, my mom stepped in.
456
She’d watch over the accounts, make sure I didn’t blow everything on some insane idea.
457
I mean, I did anyway, but I’m sure she probably kept some back for safety.
458
She knew how obsessed I was.
459
She kept me grounded.
460
It wasn’t professional. Not even close.
461
But it was family.
462
And that’s what I needed.
463
At the same time, I knew the videos had to look and feel better.
464
I couldn’t just be the guy with a camera anymore.
465
That’s when I came up with one of the weirdest ideas I’ve ever had.
466
I literally went to a comedy club just to find someone funny to hang around with and bring on board.
467
My thought process was, if they can make people laugh in real life, maybe they’ll make my
468
videos funnier too.
469
Or make me funnier. Hey, I’m Jimmy.
470
Do you want a job? What?
471
That’s how I met Tarek.
472
Not through resumes.
473
Not through some formal interview. Just a hunch.
474
And he’s still with me today.
475
Piece by piece I built the crew.
476
Chandler, Chris, Carl.
477
And after videos, people stuck around and became part of the team.
478
That still happens today.
479
None of us knew what we were doing.
480
I remember standing in front of a whiteboard showing what our channel growth would be.
481
It must’ve looked like I was crazy back then.
482
But we believed in each other.
483
Or for some reason, they believed in me.
484
And that belief was enough.
485
We finally got a real office.
486
Well, calling it an office is generous.
487
It was just a couple of rooms. White walls.
488
Not much furniture. Bad insulation.
489
But to us, it was a studio.
490
It was headquarters.
491
It was the birthplace of it all.
492
And it led to where we are today, with huge studios and land.
493
Looking back, it was chaos.
494
But that chaos was magic.
495
I didn’t see limits.
496
I only saw the next idea.
497
The next crazy stunt.
498
And slowly, that bedroom channel turned into a huge operation.
499
That team turned into a company.
500
And that company turned into the studios you see today and the growth we’ve gained.
501
The growth, I thought that was huge.
502
And it was, at that point.
503
But growing to the biggest channel in the world with more subscribers than the entire
504
population of the USA.
505
A mega brand with chocolate bars and toy merchandise.
506
It’s really like a movie, this dream.
507
Through perseverance to where we are today.
508
It was about to go to another level. Again.
509
And that was my early growth story.
510
From a kid in Greenville uploading videos no one watched to building something so much
511
bigger than I ever imagined.
512
And the truth is, I never stopped. I couldn’t.
513
Every upload was a lesson. Every mistake. Every win. I studied it.
514
Obsessed over it.
515
Broke it down until I knew exactly what to do next.
516
I tested everything. Niches. Themes. Challenges.
517
I remember when the One Versus series started back a few years ago and the survival challenges.
518
24 hours in the desert in March 2019.
519
Surviving 24 hours straight in a rainforest in September 2019.
520
Even spending 24 hours underwater in August 2018.
521
People thought I was crazy. Maybe I was.
522
But I wanted to see how far we could push YouTube.
523
How far we could push ourselves.
524
These weren’t just videos.
525
They were experiments.
526
Each one bigger, stranger, more impossible than the last.
527
And every time, we raised the bar.
528
I wanted to create the best video I could.
529
That’s what my aim is with every video, basically.
530
What would make this video better, more watchable, and be the best?
531
This often meant pushing the limits.
532
Over the years, people have even leaked my company blueprint.
533
Literally the notes of how I run this channel.
534
How I try to clone myself and staff.
535
People love to talk about that.
536
How I reinvested 100% of what I earned for years.
537
The idea was simple.
538
If I believed in the channel, I had to put it all back into the channel. No safety net. No backup plan.
539
That mindset grew into Beast Burger for a while.
540
And then into Feastables.
541
Into studios bigger than we could ever have imagined.
542
This wasn’t just YouTube anymore.
543
It was something else.
544
Something no one had a name for yet.
545
Sometimes I think back to that first $10,000 video.
546
Walking around the block, begging a brand to double their budget, just so I could give it away.
547
That moment was messy, unplanned, but it started everything.
548
And it led here. To all of this.
549
And the truth is, that was just the beginning.
550
I can’t even go into them all.
551
This video will need a part 2.
552
From launching the second channel, then Beast Gaming, to starting Beast Philanthropy where
553
we could give back on an even bigger scale.
554
Partnering with Amazon for Beast Games, rallying the world with Team Trees in 2019 and Team Seas in 2021.
555
And along the way, we’ve changed the lives of countless people.
556
Sometimes with money.
557
Sometimes with opportunities.
558
Sometimes with their health.
559
It’s been a wild ride.
560
So that’s my growth story.
561
From nothing, to this.
562
But growth never really ends.
563
There’s always a bigger idea.
564
A crazier challenge. A new frontier.
565
So maybe this is the end of part 1.
566
Because the real question isn’t what I’ve done so far, it’s what comes next.
— The Elon Musk Inspired “Movie” made with AI —
1
Some people build companies. Some build machines. I guess I’ve been trying to build the future.
2
I’ve been broke. I’ve been fired. I’ve slept on factory floors and watched rockets explode.
3
But I never cared about being normal. I cared about making life something more.
4
This isn’t a story about money or fame. It’s about obsession. And how far one person will
5
go to push humanity forward, even when everyone thinks he’s lost his mind.
6
I was born in Pretoria, South Africa, 1971. My parents, Errol and Maye Musk. My dad was an
7
engineer. Brilliant in some ways, but also difficult. My mom, a model, full of life and
8
energy. But our house, it was violent. It wasn’t an uncommon thing to hear shouting,
9
doors slamming, or worse. A lot of people think a rough childhood breaks you. Maybe it does,
10
or maybe it forges you. That’s what it did for me. I didn’t know it at the time,
11
but it was a sort of early test. And I’ve always liked a challenge. I remember trying
12
to understand why people acted the way they did. Why my father could be brilliant, but also cruel.
13
I asked questions no one answered. And my mother, she tried. She really did. But sometimes even love
14
can’t change a pattern. I started thinking a lot. A lot more than other kids. And I realized,
15
if I could figure out the world myself, maybe I wouldn’t have to be part of it the way it was.
16
Maybe I could escape. Not physically, but in a sense, mentally. Build something that
17
mattered. By the time I was five, my mind wouldn’t stop. And that’s when the real work
18
began. Not school, not friends, not the adults around me. But understanding how to make things
19
happen. Ideas. Ideas everywhere. Exploding, never stopping. That’s when I knew, I wasn’t like other
20
kids. By the time I was five or six, I thought I was insane. My brain wouldn’t stop. Ideas kept
21
bouncing around, one after the other. I’d sit in class and stare at the walls, thinking about how
22
the world worked. Atoms, energy, space. Nobody else seemed to see it the way I did. I thought
23
maybe I was broken. But really, I was just different. I started reading everything I
24
could get my hands on. Encyclopedias, science fiction, technical manuals, anything that
25
explained how things worked. And then I tried building. Small things at first. Circuits,
26
little gadgets. Until one day I did something bigger. I can get money for making a computer
27
game? Well, that sounds like fun. By age ten, I discovered computers. At first it was just games.
28
I loved games. But soon I realized I could make my own. I taught myself to code, line by line,
29
late into the night. And at twelve, I sold my first program. A game called Blastar. Five hundred
30
dollars. Not much, but proof. Proof that I could turn ideas into something real. That I could shape
31
the world, not just sit in it. By the time I left South Africa, my mind was always moving,
32
always building, always imagining. I didn’t know exactly what I’d become, but I knew I’d
33
leave something behind. Something big. And that meant leaving the place I called home.
34
South Africa didn’t feel like the place where the future would happen. I loved my family in
35
some ways, but the world outside, it was small, limited, confined. I wanted to go where ideas
36
could grow. Where technology, ambition, and risk weren’t punished, but rewarded. I left home at
37
seventeen. Alone. No safety net, no plan beyond learning and building. I wanted to understand how
38
the universe worked, and how the economy worked, so I could make things happen. Physics and economics
39
became my weapons. I studied at Queen’s University in Canada first, then transferred to the University
40
of Pennsylvania. Dual degrees, physics and economics. Some people said I was crazy trying
41
to do both, but I didn’t care. Understanding the universe and understanding money, that
42
combination would become critical later. If you know the rules of the world, and the rules of
43
energy, you can shape the future. Leaving home wasn’t just about escape, it was preparation.
44
Every challenge, every lonely flight, every sleepless night, I was building something inside
45
me. Resilience, persistence, a mind that wouldn’t quit. And then, in 1995, with a computer, a car,
46
and two thousand dollars, I arrived in Silicon Valley. That’s when the real test began. The world
47
I’d imagined, the one I wanted, it wasn’t going to give itself to me. I had to take it.
48
Here’s what I did. It was 1995, I arrived in Silicon Valley with almost nothing. A beat-up car, a second-hand
49
computer, and two thousand dollars. Everyone here was chasing the next big thing, and I realized I
50
had no safety net, no shortcuts. If I wanted to build the future, I’d have to make it myself. So I
51
tried to get a job at Netscape. No response. I thought, okay, if I can’t find a path, I’ll create
52
my own. That’s when I decided to build Zip2, a platform to help newspapers move online. It wasn’t
53
glamorous, it wasn’t easy, but it was necessary. And necessity is the mother of everything. Starting a
54
company at 24, with negative money in my pocket, is like staring into an abyss while eating glass. I
55
slept on the floor of our rented office. Showers? YMCA. Meals? Whatever I could afford. Every day was
56
a test, every day the possibility of failure was real. We started small, just me, my brother, a
57
friend, and a handful of salespeople working on commission. We went to every newspaper in town,
58
showing them the future of media online. Most ignored us, some laughed, but a few, a few saw
59
the potential. Zip2 grew slowly, then suddenly. We landed deals with bigger newspapers, including
60
the New York Times. Then Compaq came knocking. In 1999, after four years, they bought Zip2 for
61
307 million dollars. I went from sleeping on a couch to having millions overnight. But the money?
62
It wasn’t the point. It was proof. I could turn ideas into reality. Most people would have stopped
63
there. Million dollar car, success at 27, but I wasn’t most people. I wasn’t done. If Zip2 was the
64
first step, the next steps were going to change the world. I’m only here and you’re watching this
65
now through sheer determination and hard work and passion, devoting myself to this. If you’re
66
enjoying my story and this movie, just know it’s not made by any company or big team. It’s one guy,
67
a laptop, months of research, AI tools, and a whole lot of heart. All we ask is that you subscribe to
68
help us keep creating stories. It’s free, takes one second, and helps keep us independent. Thanks.
69
Okay, back to the video. After Zip2, I had money for the first time. Enough to feel like the world
70
was mine, and enough to crash into it, literally. I bought a McLaren F1. It was beautiful, powerful,
71
and terrifying. I crashed it within a year. Lesson learned, speed without control is just chaos. But
72
beyond the cars, it was the feeling of accomplishment that stayed with me. Zip2 proved that ideas could
73
become reality, that I could create value from nothing. It was intoxicating, but it wasn’t enough.
74
I wanted more, something bigger, something that could change the world, not just make headlines.
75
That’s when I started thinking about online banking. People were still mailing checks,
76
writing cash orders, dealing with fees and delays. I thought, this can be better, way better. The
77
future isn’t just in code, or cars, or rockets. It’s in changing how people live, how people
78
interact with money, with information, with the world. In 1999, I founded X.com, an online bank.
79
People laughed. They said, this will never work. But I didn’t care. The rules weren’t set yet. We
80
were building the future, and the future doesn’t wait for permission. X.com would eventually become
81
PayPal, but the road wasn’t smooth. I was fired as CEO while on my honeymoon, a painful blow.
82
Most people would have quit. Most people would have stayed quiet. But not me. I saw the bigger
83
picture. The company would survive, and I would survive. And eventually, we would all win.
84
Losing control of X.com didn’t stop me. If anything, it lit a fire. Because winning isn’t
85
about never failing, it’s about bouncing back. Bigger, faster, smarter. And the next chapter
86
would change everything.
87
In 2002, eBay bought PayPal for 1.5 billion dollars. I walked away with 165 million dollars.
88
For most people, that’s a life-changing fortune. For me, it was just the beginning. Because I
89
didn’t want comfort. I wanted impact. I wanted to do something that could redefine humanity itself.
90
People thought I’d disappear to some beach somewhere. That I’d sip cocktails and play
91
video games. But I wasn’t built for leisure. I was built for the impossible. I wanted to go
92
where no one dared. Literally, to the sky. And beyond. I founded SpaceX with a single goal.
93
Make life multi-planetary. Everyone thought it was insane. Friends, family, investors. They all
94
said the odds were impossible. But history favors the stubborn. The first three rockets failed.
95
Exploded. Burned. Turned into nothing. Each failure cost millions, and a bit of hope.
96
Our team was exhausted. Everyone doubting. I had one last chance. One rocket. One shot.
97
If it failed, SpaceX would end. Then came the fourth launch. Falcon 1. We made orbit. I cried.
98
Not just for me, or the team. For humanity. That little rocket proved that we could defy the
99
impossible. SpaceX succeeded. But the mission was just beginning. The world’s problems weren’t just
100
in the sky. They were on the ground. And the next battle, making Earth sustainable, would demand just
101
as much courage, vision, and stubbornness. Around the same time as SpaceX’s breakthrough, I invested
102
in Tesla. Electric cars. People laughed. They said the world would never switch from gasoline.
103
That EVs were a fantasy. But I wasn’t building cars. I was building the future. 2008 was the
104
darkest year. Both SpaceX and Tesla were days, maybe hours, from bankruptcy. Every move was
105
critical. Every decision could be the end. I’d put everything I had into saving both companies.
106
On Christmas Eve, I got a call from NASA. They awarded SpaceX a contract. It wasn’t just money.
107
It was survival. That one contract saved Tesla, too. Suddenly, the impossible became possible.
108
Tesla was still a gamble. Roadblocks everywhere. Investors doubting. Factories failing. But failure
109
wasn’t an option. We had to prove that electric cars could be better, faster, smarter, and more
110
desirable than gasoline cars. It was more than cars. It was a war against the status quo, against
111
inertia, against disbelief. Every mile we drove, every battery we perfected, every skeptical article
112
we endured. It was all part of a bigger mission, changing the way the world moves. With SpaceX
113
launched and Tesla fighting for survival, the path forward was clear. The future wasn’t just a dream.
114
It was something I would build with my own hands, one impossible challenge at a time.
115
After 2008, everything changed. Tesla started to find its feet. SpaceX began launching Falcon 9
116
rockets that actually returned. The dream of reusable rockets, something people said was
117
impossible, became real. But I wasn’t stopping there. The world needed more than cars and rockets.
118
It needed solutions to energy, transportation, and technology itself. I wanted to change the world.
120
I wanted Tesla to be more than a car company. It had to change the way we produce and use energy.
121
Solar, batteries, self-driving technology, all pieces of the same puzzle. If we didn’t build it,
122
no one else would. SpaceX was no different. Every rocket launch was a test, every failure a lesson.
123
I learned to embrace failure as a teacher. The first few Falcon 9 landing attempts exploded, the next
124
ones barely missed. But each time we got closer, until the day we made it, and we landed it.
125
building rockets or cars, we were rebuilding the future itself. Every innovation was a step toward
126
a world powered by clean energy, connected by satellites, and capable of reaching other planets.
127
We had to show people that the impossible wasn’t just possible, it was inevitable if you were
128
willing to do the work. And so we built. Every sleepless night, every boardroom battle, every
129
rocket, every car, every line of code, it all led to one purpose, proving that humanity can achieve
130
more than it ever imagined. We were no longer surviving, we were leading the charge into
131
tomorrow. But all of this came at a price, a cost most people don’t see. Success isn’t just about
132
innovation, it’s about sacrifice, endurance, and the willingness to face your own limits.
133
Success has a way of taking more than it gives. When people see rockets landing, or Tesla cars
134
on the road, they see the result. They don’t see the sleepless nights, the missed birthdays,
135
the divorces, the stress that never goes away. I’ve paid a price most people would never imagine.
136
There were times I worked 120 hours a week. There were times I didn’t eat properly, didn’t sleep
137
properly, didn’t see my children. You think being a CEO is glamorous? It isn’t. It’s grueling,
138
and it takes a toll on relationships, marriages, friendships, family. Divorces, lawsuits, public
139
scrutiny, all of it was part of the cost. I’ve been called crazy, reckless, egotistical, and maybe I am,
140
but I’ve learned something important. If something is important enough, even if the odds are against
141
you, you pursue it, even if it costs everything else. Sacrifice isn’t glamorous. It’s painful.
142
It’s lonely. It’s relentless. But it’s the cost of pushing humanity forward.
143
The cost of dreaming big and actually building those dreams. Every setback, every broken
144
relationship, every sleepless night, it all became part of the fuel for what was next.
145
Because the world doesn’t wait for the cautious. It waits for the relentless.
146
And if you want to survive, truly survive, you have to be willing to pay the price.
147
Mars wasn’t just a dream. It wasn’t a science experiment or a PR stunt. It was a lifeboat
148
for humanity. I realized that if we stayed on Earth, we risked extinction, war, climate, accidents,
149
technology gone wrong. SpaceX exists so that consciousness survives, so that life can continue.
150
It’s a lifeboat for humanity. I realized that if we stayed on Earth, we risked extinction, war,
151
accidents, technology gone wrong. SpaceX exists so that consciousness survives,
152
so that life continues. The early days were brutal. Our first three rockets, explosions,
153
money gone, hopes dashed, investors calling every day asking for refunds. The fourth launch,
154
that was do or die. If it failed, SpaceX was finished. When that fourth rocket lifted off
155
and reached orbit, it wasn’t just a technical victory. It was survival. It was proof that
156
audacity, sacrifice, and relentless iteration could beat the odds. After that, everything
157
changed. Investors came back. Talented people joined. We could dream bigger. Reusable rockets,
158
Mars colonization, Starship. Humanity’s future wasn’t just in our hands. It was in our rockets.
159
Mars is more than red soil. It’s a chance for a fresh start, a laboratory for innovation,
160
a reminder that survival requires ambition beyond comfort, beyond fear, because the universe doesn’t
161
wait for the cautious. It waits for the bold. And if you want to leave a mark,
162
you have to reach for a planet that’s never been touched.
163
Mars was the dream, but Earth still had problems. Transportation, energy, communication. If we
164
didn’t fix those, we wouldn’t survive long enough to leave for another planet.
165
That’s why I started thinking bigger. Tesla wasn’t just a car company. It was a mission to
166
accelerate sustainable energy. People laughed at me when I said we’d make electric cars desirable,
167
profitable, and even fast. But here we are, changing how the world moves. Starlink was next.
168
Global connectivity. The internet for everyone, everywhere. I realized that to survive,
169
humanity needs access. Not just to information, but to opportunity.
170
Neuralink? That was a different frontier. Human-computer symbiosis. If we don’t enhance
171
cognition, we risk being left behind by AI. It’s about survival, evolution, and understanding our
172
own brains before technology outpaces us. It’s all connected. Tesla, SpaceX, Starlink, Neuralink.
173
Each piece builds the other. Each innovation prepares us for a future where humanity can thrive,
174
even if Earth fails. Some people call me crazy. Maybe I am. But the world doesn’t remember the
175
cautious. It remembers the ones who reach beyond, who risk everything for the future.
176
People call me crazy. They say my ideas are impossible. Maybe they’re right. But the people
177
who shape the future, they always seem crazy at first. I didn’t do this for fame. I didn’t do it
178
for money. I did it because someone had to. Someone had to take the risks, push the boundaries,
179
and build the tools for tomorrow. SpaceX isn’t just rockets. It’s people. It’s people.
180
SpaceX isn’t just rockets. It’s insurance for humanity. Tesla isn’t just cars. It’s
181
the future of energy. Starlink isn’t just satellites. It’s connection. Freedom.
182
Survival. Neuralink isn’t just science. It’s evolution. I’ve faced failures, bankruptcy,
183
public scrutiny, personal sacrifices, sleepless nights, relationships strained,
184
criticism from every direction. But if something is important enough, the odds don’t matter.
185
I am not the hero of this story. The hero is humanity. The one that survives, innovates,
186
reaches for the stars, and refuses to be defined by limits. I’m just building the tools.
187
Maybe I’m crazy. But the people who build the future always are.
— Creating the Viral Ai Talking Baby Videos —
So you want to make the AI baby talking video that's all over your feed? I got you. I'll show you how to do this in three fast steps, so by the end of this video, you'll be able to create your own viral AI baby trend using just a few tools. Let's go.

We're going to cover three steps here. First, AI image: you're going to need an image for this, obviously. Second, AI voice: either you create the voice yourself or maybe you're clipping the audio; I'll show you both options. And three, the most important bit: lip sync. You need to lip sync this so it looks super, super real, and I'll show you the tool I'm going to use for that. It's a good one. Stand by for that. Honestly, you need this tool. Okay, let's begin.

Step one: AI image. I like to use one of two tools here, Midjourney and Runway, so let's begin with Midjourney. Now, any AI image creation tool will do. Midjourney is my favorite, but if you have a subscription to another, it'll pretty much do the same thing. Just make sure you set the aspect ratio: 16:9, or, if like me you want yours for a Short, 9:16. I'll keep mine... yeah, let's make it for a Short. And if I wanted it to resemble someone, for example me, a baby version of me, I could use that image in Omni Reference up here and say: create a baby that looks somewhat like this guy. But most of you might just want a baby that looks generally like a baby, so let's start with that.

"An image of a baby, two years old, male." I say what it is: the primary thing I want in my image, a baby. "A baby" is a very general term, so I give an age (one and a half to two years old, which most of these babies are) and their gender. Then let's do the location: "sat in a modern office at a desk, facing camera, presenting." I always say something like "facing camera" or "presenting" so it understands the subject is presenting and will probably be in the center of shot facing us, which is what we want. If you want them positioned slightly differently, then prompt for that. Now you can get specific and say something like "they have headphones around their neck and are wearing a white top," which is something I want to try and generate. If you leave that out, the result is going to be very random, so start dictating specifics: even hair type, eye color, things like this. And if there's anything else you want in the background of shot, for example, I could say "a neon sign saying AI VIDEO in the background." Let's run with that.
And here are my images complete.
83
I think I like this one.
84
The most child looking at us almost like
85
they’re presenting AI video in the background.
86
If you like something like this, you could
87
say, I want to vary it strongly.
88
It’ll give you slightly different ones.
89
Subtle might change very small details.
90
Or if I like the style and the
91
image, I could say, Hey, I like it
92
in this style.
93
I like this image pretty much set up,
94
but I want a different version and you
95
can prompt for any differences that you want
96
here.
97
The strong variations on this gave me these
98
as examples.
99
So this one just completed this, this, and
100
this really good.
101
I really like this one.
102
Now I’m going to just download that and
103
keep that.
104
Another tool I like to use is runway
105
ML, which by itself can create image or
106
video or turn your image into video.
107
But what I quite like to do is
108
click here and upload that image as a
109
reference.
110
And I can say, give me a side
111
profile shot.
112
Like you saw in the intro of this
113
video, which sometimes mid journey struggles a little
114
bit more to do, but not always you
115
could pretty much do them in both.
116
So I use either of these tools.
117
Okay.
118
Step one complete; on to step two. You're going to use real vocals or clone a voice with something like ElevenLabs (I'll show you that in a moment), or perhaps you're clipping the audio, in which case you can just clip it from the video you're taking it from. But if you want to create your own, I'd suggest ElevenLabs. You've got two options. There's Text to Speech, where I can choose a voice that I want, for example Alice, and have it say something like this. Or you can go to Voice Changer and, in exactly the same way, choose a voice, click to record my own audio, and it will change my voice into that voice. Now you have your audio.
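If you'd rather script the voice generation than click through the web UI, ElevenLabs also exposes a REST API. Below is a minimal sketch, assuming you already have an API key and a voice ID from the ElevenLabs dashboard; the endpoint shape follows ElevenLabs' public text-to-speech API, but the default model name is an assumption you should check against your plan.

```python
import json

API_BASE = "https://api.elevenlabs.io/v1"

def tts_request(voice_id, text, api_key, model_id="eleven_multilingual_v2"):
    """Build the pieces of an ElevenLabs text-to-speech call: endpoint URL,
    auth header, and JSON payload. Sending it with any HTTP client returns
    audio bytes. (model_id default is an assumption -- verify the models
    available on your plan.)"""
    url = f"{API_BASE}/text-to-speech/{voice_id}"
    headers = {"xi-api-key": api_key, "Content-Type": "application/json"}
    payload = {"text": text, "model_id": model_id}
    return url, headers, json.dumps(payload)

# "voice123" and "MY_KEY" are placeholders for your real voice ID and key.
url, headers, body = tts_request("voice123", "Hey y'all!", "MY_KEY")
```

POST the body to the URL with those headers and save the response bytes as an MP3; that gives you the same audio file you'd download from the site.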
Now on to step three: lip syncing. This is where the magic really happens. I use Hedra to sync, because I've tried loads of tools and right now Hedra, I think, is the perfect one for this. Let me show you; it's a super simple site to use. You just click on Video right here, then choose the audio script (that's the audio that you have) and upload your audio there. Then choose the frame: I can choose that image we made earlier, the AI child, and hit run. And that's it. Now you have all the elements to make your own AI baby video. All that's left is to drop everything into your editor, CapCut, Premiere, wherever you like, and you're done.

And there you have the AI baby video tutorial. In the next video, I'm recreating: "Hey y'all." And it's wild. On to another AI trend.
— How to Create the Viral Ai Singing Yeti Videos —
So, you want to create the viral singing yeti AI video that's blowing up right now? Let's break it down in four simple steps, so by the end of this video, you'll know exactly how to generate a talking, singing yeti with just a few AI tools. Let's go.

We'll do this in four steps. Step 1: make the yeti talk. We'll make a quick intro of the yeti talking to introduce himself. Step 2: make the yeti sing. We'll get an actual song, about anything you want, created with an amazing AI tool. Step 3: sync the yeti to sing the song with a couple of tools right here. It's going to look amazing. Step 4: build out the yeti video with AI-generated images turned into videos. You can have a complete AI yeti music video if you want. Okay, step 1, let's go.
Let's dive into Veo 3 here, and let me just show you the video we're trying to recreate. Here's a video that I really liked: "Bigfoot – Born to be Bushy (Official Music Video)." You should definitely go and check this out. It's Bigfoot playing the banjo in the woods; he releases a song. First he talks to the crowd: "Hey y'all, Bigfoot here. This is my new single. Hope it don't scare y'all off." Okay, really good. So the first thing I'm going to do is screenshot this. This is a little bit of a hack, a quick way to do it. Let me show you.

Another tool I want to show you is Whisk, from Google Labs, and I'm going to use it because it's really good at one thing: just drop that screenshot, any screenshot image you have, into here and it analyzes it, just like this. It gives you the full layout, the full description of what's needed. It's got the big brown ape, the background, the rest of it. There's a little bit of extra text here because there's text on screen at the bottom; let's ignore that. Let's copy it, and then let's go into Flow. I'm going to do text to video with Veo 3, paste in that prompt, and then all I'm going to do is add a little bit that I want him to say for the intro, like we saw. So he says: "Hey y'all, this is my new song all about creating AI video. I hope you love it." And let's hit run.

And then, like one minute later, this is the video I have. Take a look at this: "Hey y'all, this is my new song all about creating AI video. I hope you love it." Veo 3 is really changing the game. For these intros it's so good: I've got sound, I've got voiceover, I've got lip sync which is flawless, I've got the image I wanted taken from image to video, and an automated prompt, because I used Whisk on an image I already had and screenshotted. Veo 3 is combining all these AI tools into one. It's truly incredible.
Now on to step two: we want to make ourselves a song, a song that our yeti can sing along to in our video. So I come over to Suno. Suno is very simple, and you get 50 credits a day; that's 10 songs. Very, very nice. When you come on over, ignore Custom, go to Simple, and let's say: "a country song about life as a yeti trying to make it as a country singer, funny, comical." I make sure right here I've said "country song" so it knows the genre (or you could select it down here), what the song is about, and also that I want it funny and comical. Then just come down and hit Create. You'll get two versions here on the side, so I can hit play and listen: "I'm a big old fella from the mountain snow, trading in my fur for a cowboy show." Amazing: it's about what I wanted, and in the right genre. I can click here to download, but if you're going to use this for commercial purposes, make sure you have a license for it. Now that's step two complete.
In the next step we're going to get ourselves some images to lip sync this to, but first I want to do one more quick step that a lot of people skip. The step after this is lip syncing, and lip syncing works better if the vocal doesn't have music behind it, so I want to get my music track without the background audio. To do that, I grab my track in CapCut, click on it, then under Basic scroll down to Isolate Voice and let that work its magic. Then, when you come to the edit, you put your voice-synced yeti over the top and have the actual full song in the background, and it will all sync up together. Okay, let's give it a little play. Great, I've got this without music behind it. Okay, now on to step three.
Step three: I want to sync these together, and I'm going to use Hedra for that. Hedra is an amazing tool, really good at this. Quite simply, I come over to Video, select the audio script, and upload the vocal track we just exported from CapCut. Then I choose my image; I want to lip sync this to something like this one I created earlier. Let's use that. Then all I do is hit run and it will sync these together. That's completed. So here it is, synced.
And step four: where I got that image from, and how to create more images, multiple images you might want to either lip sync or just use as background shots for your video. I like to use Midjourney; it's by far probably my favorite image creation tool. To do that, we need a text prompt up here. I start with something like "a yeti facing camera, ultra realistic." So now it knows it's realistic, not animated; they're facing camera, facing us; and what it is, a yeti. You might also want to use "Bigfoot" for this. Then you go into location: "sat on a fallen log in the woods, bright and sunny." And if there are any specifics you want, for example if you want him to hold an iPhone, wear headphones, hold a selfie stick, anything like that, you'd add it here. Now let's run that.

Here I've got a nice variety of shots. Taking a look at these, scrolling through, yes, I quite like this one. What I didn't ask for is a different camera type, or whether he's holding a banjo or a guitar. So when I've found the one I like, I click on it to populate it up here and move it into Omni Reference, which means my next image is going to take this character, the yeti, as a reference. And I can say "close up, holding a banjo." I can also say I want it in the same style, and run that. Now that it's finished, you can see I've got the same yeti, the same layout, and it's in the same style as my previous shot: realistic, with the same color tone and palette. So now I could use that as a style reference and Omni Reference to put them inside a honky-tonk, inside a bar, playing a concert, wherever it is you want, and get the same feel, the same character and style every single time to generate more shots.
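The consistency workflow above boils down to reusing one character reference across many shot prompts. A rough sketch of that as a template follows; in current Midjourney the Omni Reference image is passed with the `--oref` parameter and weighted with `--ow`, but treat those parameter names as assumptions to verify against the Midjourney docs, and the image URL below is a placeholder.

```python
def shot_prompt(action, reference_url, style="ultra realistic, same style",
                omni_weight=100):
    """Build a Midjourney prompt that reuses one reference image across
    shots so the same yeti appears every time. --oref / --ow are assumed
    to be Midjourney's Omni Reference parameters (verify against docs)."""
    return f"{action}, {style} --oref {reference_url} --ow {omni_weight}"

ref = "https://example.com/yeti.png"  # placeholder: your uploaded yeti image
shots = [shot_prompt(a, ref) for a in (
    "close up holding a banjo",
    "playing a concert inside a honky-tonk bar",
)]
```

Each new prompt changes only the action and setting, so the character and style stay locked to the reference shot.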
You can even lip sync these in the same way we did in Hedra earlier, or just use them as shots in your music video. And that is it, complete. You can now make your viral yeti music video, and I'll see you on the very next viral AI video.
— Creating the ASMR Viral Glass Cutting Fruit (weird but amazing!) —
There's a viral AI trend going around that's super simple to create with just one tool and one text prompt, and there's actually even an automated way to get that prompt; I'll show you in a moment. This ASMR (or "AI SMR") glass fruit cutting has taken over socials like YouTube, TikTok and Instagram. It's super satisfying to watch and really simple to make, in just a couple of steps I'll show you now.

First we're going to go into Flow to use Veo 3, which will look something like this once you're signed in. Just click New Project, and quite simply I could put a text prompt in here about cutting glass fruit. We'll do that and compare in a moment, but there's also another way to do this. For example, if I really like this shot right here, I can take a screenshot just like this, and let me quickly show you another tool from Google Labs called Whisk, and this is what I use it for. Whisk is primarily made to create images, or to combine elements like subject, scene and style, but if I just grab that screenshot, drop it into here, wait for it to analyze, and then click this icon in the corner, I can see it has exactly described what it is.
The image shows a close-up of a
35
person hand holding and cutting a pomegranate.
36
It hasn’t registered as glass but that’s okay.
37
I’m going to put that in.
38
Now why is this good?
39
Of course you could be using something like
40
ChatGPT but this is a Google product so
41
we know that this is what Google is
42
seeing in this image and it’s a great
43
way to describe it.
44
So let’s just copy all of that.
45
If I come back to the floor, I’m
46
going to paste that and just add in
47
a pomegranate made of glass.
48
With a knife, the person’s skin tone appears
49
light, the hands are positioned carefully, all made
50
of glass and then it’s going to have
51
something like a video player interface is at
52
the bottom of the image because it can
53
see that in our screenshot.
54
So let’s just delete that.
55
Now I’m also going to add the sounds
56
of a knife cutting through glass.
57
Now make sure I’m selecting I want VO3
58
because with VO3, I can get sound effects
59
with this audio.
60
Okay and let’s hit run and now they’ve
61
completed.
62
Let’s take a little look at these.
63
Okay and the next one.
64
Nice, so these are taking kind of chunks
65
from these.
66
It’s about to turn into another shot right
67
there but look how realistic they look and
68
the sounds are perfect also.
69
So I’m going to just click right down
70
here to regenerate that prompt in here and
71
I’m going to make sure it says to
72
slice completely through it in one movement.
73
Now changes to the knife cuts through the
74
pomegranate in one motion.
75
Smooth revealing numerous shiny plump aerials inside all
76
made of glass.
77
I’m going to remove the barrier it says
78
that some have already fallen out and let’s
79
run this to compare.
80
Now they’ve finished generating because I took away
81
the prompt that mentioned the kitchen sideboard it’s
82
done it like this.
83
This one’s in the person’s hands.
84
I actually like this one better.
85
I almost want to watch more because it’s
86
in their hand.
87
Oh and the sound effects are so nice
88
and this one right here is on the
89
side.
90
Super super satisfying.
91
Now we can do this just for text
92
prompt ourselves.
93
We need to think about several points here.
94
First is the main subject a glass banana
95
a banana made of glass yellow in color
96
on a kitchen worktop.
97
So I’ve said this twice I often do
98
that AI tools glass banana a banana made
99
of glass.
100
Also glass is transparent primarily so I’ve made
101
sure to say yellow in color on a
102
kitchen worktop.
103
Then I’m getting into action a large kitchen
104
knife cuts through the banana in one smooth
105
motion close up.
106
I want this in close up I don’t
107
want a long shot from far away so
108
I’m telling it what’s happening and what the
109
motion is.
110
I could also describe the kitchen knife if
111
I want to or the person’s hand but
112
I’m gonna just leave it up to Veo
113
here.
114
Next sound I want for this you could
115
have someone speaking if you were doing a
116
different style video and put that in there.
117
Veo is amazing like that.
118
The sound of the knife cutting through glass.
119
Here’s the sound I want and lastly I
120
mentioned the background.
121
The background is blurred out of focus and
122
dark and let’s hit run.
123
These have generated let’s take a little look.
124
Oh super satisfying.
125
Okay and the other one.
126
I love it.
127
So those are elements needed for a text
128
prompt.
129
So you can either use Wix to automate
130
this slightly and it’ll get you most of
131
the details or just follow that text prompt
132
guide.
133
Now the advances of using Veo as opposed
134
to something like Runway which is another tool
135
that I love which you could also you
136
could also add in a reference image right
137
down here or we could put in the
138
same text prompt paste that into here to
139
generate an image.
140
I could take an image that I like
141
something like this and let’s say okay let’s
142
use that for video and I can say
143
knife cuts through glass banana.
144
Here’s the video generated by Runway.
145
Oh it’s cutting through it but it’s also
146
leaking by the looks of it.
147
It’s just not quite as good as the
148
Veo prompt all-in-one is it and
149
you could also do the same thing in
150
Mid Journey and I could turn this into
151
video quite simply in the same way but
152
then with both of these there’s no audio
153
so you’d have to use something like 11
154
labs in sound effects and get knife cutting
155
through glass and then put them all together
156
inside your editor or if you have access
157
to Veo you can do all of them
158
in one.
159
Now that’s an idea for a viral channel
160
cutting through glass ASMR and you could be
161
cutting through all kinds of things glass sushi
162
glass food whatever it is that you wanted
163
to do and of course you could have
164
these in 16.9 or 9.16 for
165
whatever preference you have for your social media.
166
See you on the next AI viral trend.
— How to Make the Influencers Doing Crazy Stunts Viral AI Videos —
There's a viral AI video going around of influencers doing crazy, crazy stunts. I'll show you how to make this with AI in just two simple steps, a real quick tutorial, so by the end of this video you'll be able to make your own and upload it, running your own AI channel doing this. Okay, let's start. Step one.

Before we get into actually making this, we obviously need an idea for what we're going to do. What I do is something like this: I grab a video that I like, much like this one by The Dor Brothers called "Influenders." This is a really funny video; you should check it out. It's like the end of the world, with influencers doing stunts, like this guy who does a Bitcoin plug while the world is obviously ending behind him: "This collapse is literally the perfect dip. I'm buying more right now." Okay, grab that URL.

Now, I use Gemini for this; you could be using ChatGPT or anything else you'd like. I tend to stick with Google products because I'm going to use a Google product, Veo 3, in a moment to create this. And I say, "I want to create a video like this," and paste in that URL. Does that make any difference? Does it understand what the video is about? Possibly; YouTube is also a Google product. Let's see. Then I give it a bit of context. I say I want to create a video using Veo 3 (we'll get to why I mention that later), and that there are scenes of influencers doing crazy, unreal stunts, so it understands it doesn't have to keep things in the real world; they can be unreal. And I want to mention the comedy: these stunts are ridiculous, highlighting the lengths an influencer would go to for views. Once again, I often say this twice: funny, comical. Then I ask for ideas. I get into details here about what I need from this prompt: ideas for short eight-second scenes (that's what I'm going to create with Veo), like an influencer getting in a bath of cement until it hardens, jumping into a volcano for views, jumping from an airplane with no parachute. Those are examples, but I've done this before and I want to make sure it doesn't just reuse those exact examples, so I say: not these examples, but new funny ideas based on trends influencers do that are impossible, crazy or ridiculous. Give me 10 ideas. So here's the prompt about what I want, and here's the background. Let's run that.
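The idea-generation prompt above has a repeatable shape: reference link, context, tone (stated twice), constraints including examples not to reuse, and the ask. A small sketch of that structure; the function name and field names are illustrative, and the URL is a placeholder.

```python
def build_idea_prompt(reference_url, context, tone, constraints, n_ideas=10):
    """Assemble the idea-generation request described above for a chat
    model like Gemini: reference video, context about the target tool and
    format, comedic tone, constraints, and how many ideas to return."""
    return "\n".join([
        f"I want to create a video like this: {reference_url}",
        context,
        f"Tone: {tone}.",
        constraints,
        f"Give me {n_ideas} ideas.",
    ])

prompt = build_idea_prompt(
    reference_url="https://youtube.com/watch?v=EXAMPLE",  # placeholder URL
    context="I will create this with Veo 3 as short 8-second scenes of "
            "influencers doing crazy, unreal stunts.",
    tone="funny, comical",
    constraints="Not these examples (cement bath, volcano jump, "
                "no-parachute skydive), but new funny ideas based on trends "
                "influencers do that are impossible, crazy or ridiculous.",
)
```

Paste the assembled text into Gemini (or any chat model) as one message.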
And after that, we'll get the prompt to actually use in Veo, so it's almost all automated. Okay, it's generated some ideas; let me go through a few. The extreme unboxing: so it does understand that influencers do unboxings, this time on the side of a skyscraper, nearly dropping the phone multiple times. The DIY space travel influencer attempts to launch himself into space using a homemade rocket. Okay, great. The deep-sea dance challenge: an influencer attempts a viral dance challenge at the bottom of the ocean with a submarine. Okay. Mount Everest. Burying themselves for a digital detox; I feel that could actually happen, I think MrBeast has done that, actually. A pet whisperer trains a wild bear and a pack of wolves using positive affirmations. Or a runway show on a tightrope. There are a few good ones, but I think I like "DIY space travel: influencer attempts to launch themselves into space using a homemade rocket powered by energy drinks and Mentos," yeah, with predictably disastrous and comical results. That's really good.

So now we can create that inside Flow right here, but there's one more short step: if I go New Project, I'll need to put in a text prompt telling it what I want it to create. Rather than typing that ourselves, we can use Gemini for that too. So I say: for this eight-second scene (and I paste the idea in there), create a text prompt for Veo 3. I could just run it there, but I want to give it some specific details. So I say: create a text prompt for Veo 3; the influencer is Gen Z and speaks as such, using Gen Z language and high energy, so we get that real influencer-style energy. I want the bottle they're strapped to to be huge, and they explain the challenge in their dialogue. And I want to reiterate: all in less than eight seconds. I've done this before, and if it gives me more than eight seconds of dialogue, the dialogue will get cut off inside Veo 3.
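That eight-second constraint is easy to sanity-check before you generate: at a typical speaking rate of roughly 2.5 words per second (an assumption; real delivery varies, and energetic influencer speech can be faster), you can estimate whether a line of dialogue will fit the clip. A rough sketch:

```python
def fits_clip(dialogue, clip_seconds=8.0, words_per_second=2.5):
    """Estimate spoken duration from word count and report whether the
    dialogue should fit inside the clip. words_per_second is a rough
    average, not a guarantee."""
    est = len(dialogue.split()) / words_per_second
    return est <= clip_seconds, round(est, 1)

ok, secs = fits_clip(
    "Yo, AstroVibes here. Today we're trying to go to space. "
    "Anything for the gram."
)
```

If the check fails, trim the dialogue before pasting the prompt into Veo rather than discovering the cutoff after a generation.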
77
And I’m going to say describe the influencer in detail.
78
So they have everything that they need for this prompt. Let’s run that.
79
And it’s given me this great prompt broken down into scene dialogue and the visuals seen
80
is an extreme closeup of a Gen Z influencer, Leo Astro vibes, Jen, late teens, early twenties
81
bought a neon green hair, a fade.
82
It describes him, describes he has a huge bottle strapped to him, gigantic house-sized plastic bottle.
83
And then we’ve got his dialogue here.
84
Yo, what’s up cosmic voyagers have a viral challenge space now ultimate send it fam.
85
We’re talking pure energy drink and Mentos propulsion blasting off to touch the stars.
86
Get ready for the craziest launch party ever.
87
Wish me luck for real.
88
There’s going to be epic. Good.
89
That’s the language visuals.
90
The camera quickly pans from his face down to ridiculously oversized bottle he’s strapped
91
to showing the crude setup. Brilliant. Okay.
92
Let’s take all of this and let’s paste it into VO.
93
So under text to video, I’m just going to paste that in, make sure I have VO three selected
94
because I want dialogue with this VO two would not have that if you had another option selected.
95
So make sure you in VO three and hit run. Okay.
96
This is finished generating already.
97
Now I’ll be very surprised if it gets this first time because it was a very complex prompt.
98
It’s not like I’m saying influencer gets into a bath of jello or something.
99
Let’s play it and see.
100
Yo, what up cosmic voyagers?
101
Astro vibes here.
102
So like everyone’s doing about to go viral challenges, right?
103
But are they going to space?
104
Nah, this is the ultimate. Send it fam. Okay.
105
It was pretty good, but it didn’t quite get what’s happening.
106
There was a weird splash there, but all this detail is there a massive bottle, all these
107
energy drinks, et cetera, et cetera.
108
So let’s work on the prompt and have another go.
109
Now, to make this clearer, what I’ve said is: he has a gigantic Coca-Cola bottle strapped to his back and a pack of Mentos in his hands. And then he says he’s about to go to space: ‘I’m going to drop these Mentos in this Coke bottle and see how far it takes me. Let’s get ready for the craziest launch party ever. Wish me luck. This is going to be epic.’ He’s stood in a field, an open space, where he tries to take off. This aligns a little better with the viral trend where people drop Mentos into Coke to watch the bottles explode.
Okay, I’ve not watched this one yet, but I can see it’s loaded, and I can see Mentos in his hands and a Coke bottle strapped to his back. Let’s have a look and play this together. ‘Yo, what up, cosmic voyagers? Astro Vibes here. So, like, everyone’s doing about-to-go-viral challenges, right? Are they going to space?’ Nice. So that worked really well.
I just want it to have the ending, so I must have too much text in here. So let’s just click over here, repopulate that, and remove some of the text so it’s shorter. Now it says: ‘Yo, Astro Vibes here. Today we’re trying to go to space. Let’s drop Mentos in this Coke bottle and see how far it takes me. Anything for the gram.’ That’s still quite a bit of text, but let’s run this and try again. Next generation; let’s take a little look. ‘Yo, Astro Vibes here. Today we’re trying to go to space. Let’s drop Mentos into this Coke bottle and see how far it takes me. Anything for the gram.’ Exactly. Exactly what I’m trying to get.
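Since these clips are capped at around 8 seconds, the trimming above can be sanity-checked by budgeting words per second. Here is a minimal sketch in Python; the ~2.3 words-per-second rate is my assumption for casual spoken delivery, not a documented Veo limit:

```python
# Rough check: will this dialogue fit in an 8-second clip?
# Assumes ~2.3 spoken words per second for casual delivery (an estimate,
# not a documented Veo limit).
WORDS_PER_SECOND = 2.3

def fits_clip(dialogue: str, clip_seconds: float = 8.0) -> bool:
    """Return True if the dialogue should comfortably fit the clip."""
    word_count = len(dialogue.split())
    return word_count / WORDS_PER_SECOND <= clip_seconds

short_line = "Yo, Astro Vibes here. Today we're trying to go to space."
long_line = (
    "Yo, what's up cosmic voyagers, Astro Vibes here with the ultimate "
    "viral challenge, we're talking pure energy drink and Mentos propulsion "
    "blasting off to touch the stars, get ready for the craziest launch "
    "party ever, wish me luck for real, this is going to be epic"
)

print(fits_clip(short_line))  # the trimmed intro fits
print(fits_clip(long_line))   # the original wall of text does not
```

If a line fails the check, cut dialogue before regenerating rather than burning credits on a clip that will cut off mid-sentence.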
So if you want to download this, hit download, upscale to 1080p, and when it’s ready it will appear right here and you can click download. So that’s how I do these influencer scenes, step by step, including failed approaches or ones that need work. You can see how I go from idea, through Gemini, all the way to tweaking the text prompt to try and get exactly what I want. So now you’ll be able to go ahead and make your own scenes, your own ideas for influencers gone wild, and make your own channel doing this. Okay, I’ll see you in another AI video.
— Stormtrooper Vlogs and Videos with AI – How to Make these —
Now, viral AI Stormtrooper videos just like this are going viral right now: ‘Alright boys, orders just came in from Vader’s… and me and Greg are headed to Endor.’ I’ll show you how to make these in just one, maybe two simple steps, in a really quick tutorial right now. By the end of this you’ll be able to make your own Stormtrooper AI videos and have your own AI channel doing this.
Okay, let’s go. Step one. To make this we’re actually going to use Veo 3 by Google. I’m going to click new project, and right here I can go text-to-video and type in a text prompt. I make sure I’m using Veo 3 because I want audio: Veo 3 allows audio, Veo 2 right now does not.
Now I can just type in my own idea right here, or I can go to something like Gemini and say: I want to create an 8-second video using Veo 3 (that’s how long the clips are right now). It’s comedy, so I let it know it’s going to be funny. A video about Stormtroopers, so I’ve got my topic and my subject: vlogging like influencers. Create me a prompt that describes Stormtroopers filming a vlog, holding a camera, doing a vlog intro. Make sure the text can fit within 8 seconds of video. Make it funny.
So I’m asking for the prompt; I don’t have to type it myself. We can get Gemini, a Google product, to tell Flow what to make with a text prompt. So I’m going to do that, and I’m going to make sure it fully describes the scene and details for a Veo 3 text prompt. Let’s see what it puts out. What I haven’t done is tell it the topic of what they’re saying; I want to see what Gemini comes up with, whether it writes dialogue for them to say for an intro.
Option one: classic influencer parody. They’re inside an Imperial corridor, holding the camera: ‘What’s up, Empire? Your favorite galactic content creators are back with another fresh take.’ Thumbs up. Okay, nice. Here’s one with relatable imperfection: they bump heads and say ‘welcome back to Trooper Tales.’ Okay, I like the first one; let’s run with that.
Now, what normally happens is I paste this in, and once we’ve generated, we see the imperfections or incorrect things that were added inside the prompt and what we need to change. So I always run it first, see what the outcome is from Veo 3, and then I can see what was missing from my text prompt and go in and manually change it. We can also do a completely manual text prompt for a whole other scene; I’ll do that in a moment. Okay, that’s generated already. Let’s take a look at this.
‘What’s up, Empire? Your favorite galactic content creators are back with another fresh take.’ All right, nice. The only thing is that this ring light is facing the wrong way, although it could be lit on both sides. And you’ve got these guys doing an intro for a vlog. Very, very funny. Now, we can see the camera and can see what they’re doing; it’s almost behind-the-scenes, rather than us being the viewer. So I could prompt for that: if I reprompt over here, I could say to make sure that they are holding the camera, or that we’re looking from the point of view of the camera. I could say: from the point of view of the camera, a Stormtrooper grins awkwardly at the lens while another Stormtrooper adjusts a ring light in the background. Okay, let’s run that one. ‘What’s up, Empire? Your favorite galactic content creators are back with another fresh take.’ That’s funny. You can even hear the little bit of music in the background, the dun dun, and they’re speaking as if they’re inside a mask. Veo 3 knows everything; it’s so intelligent.
Now I’m going to try my own prompt right here, and see what you think. I’m going to say: a Mr. Beast-style intro to a YouTube video. High energy. A Stormtrooper presents to the camera and says, ‘We’re back with another challenge. The last of these guys to leave the circle wins $5 trillion.’ In the background are ten or so Stormtroopers squeezed inside a red circle painted on the floor. The setting is inside a spaceship: a big, open, bright room. Run. So this is a play on Mr. Beast’s videos, where he has a red circle and the last to leave it wins money.
Not $5 trillion, obviously. That’s quite a complex prompt; let’s see how Veo 3 does. Okay, let’s play this and see if there’s anything missing from the text prompt, based on what I see. ‘We’re back with another challenge. The last of these guys to leave the circle wins $5 trillion.’ Okay, that was good, but they were outside the circle a little bit. I could probably just run the exact same prompt; I think I said ‘squeezed inside a red circle painted on the floor, the setting inside a spaceship, big open room.’ I like that. And then I’m going to say that one of the other Stormtroopers says, in a surprised voice, ‘What, really?’ Let’s see if it puts them inside the circle; if not, I would prompt for this again. Okay, let’s see this. Oh yeah, it’s got a filled red circle, as opposed to the red circle outline I said; you could reprompt for that. But let’s see if the text was really good, if the dialogue worked. ‘We’re back with another challenge. The last of these guys to leave the circle wins $5 trillion.’ ‘What? What, really?’ Okay, so we’re getting there. So that is a video where you’d have to reprompt and reprompt and make your prompts even clearer. You can see that if you just keep going, you’re going to get exactly what you want. Stormtrooper intros: viral videos, really funny. You can now go ahead and make your own videos like this, using Veo 3 (and Gemini, if you want it to create the text prompt), and have your own channel doing this.
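All the prompts in this section share the same recurring parts: subject, action, dialogue, and setting. Veo 3 just takes free-form text, so the helper below is only an illustrative way to assemble that pattern consistently, not an official prompt format:

```python
# Assemble a Veo-style text prompt from the recurring parts used in this
# section. Veo 3 accepts free-form text; this structure is just the pattern
# from the lecture, not a documented prompt schema.
def build_prompt(subject: str, action: str, dialogue: str, setting: str) -> str:
    return (
        f"{subject} {action}. "
        f'They say: "{dialogue}" '
        f"Setting: {setting}."
    )

prompt = build_prompt(
    subject="A Stormtrooper",
    action="presents to the camera, Mr. Beast-style intro, high energy",
    dialogue="We're back with another challenge. The last of these guys "
             "to leave the circle wins $5 trillion.",
    setting="inside a spaceship, big open bright room, ten Stormtroopers "
            "squeezed inside a red circle painted on the floor",
)
print(prompt)
```

Keeping the four parts separate makes the reprompting loop easier: when a generation misses something (the circle, the point of view), you edit only the field that failed and rebuild.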
— Labubu Brought to Life with AI – AI video Idea! —
So here’s an idea for a viral AI video. If you’ve been living under a rock and don’t know what Labubu is, it’s this toy that’s just gone crazy and taken over everywhere. Now, using AI, we can make a video about this. Think of funny little sketches for social media: YouTube, TikTok, etc. And I’ll show you how to do it with just two tools. So let’s begin.
Now, the first thing we’re going to want to do is create an image. For that, I like to use Midjourney. But if I prompt something like this (I’ve been testing earlier): ‘Labubu toy doing a selfie. Point of view from the camera. Bright image. YouTube thumbnail. The background’s bright blues and pinks.’ You can see that I don’t actually get the toy; it doesn’t look like the specific toy. Even when, if I click here, you can see that in Omni Reference I’ve actually put the toy in, it doesn’t work. Well, it doesn’t always work. But it did eventually: if I come through my generations, here you see this one, this one, this one, this one especially, and this one definitely did. So what did I do? The first thing I did: I come up here, I go add image, and I make sure I add the toy on a clear white background. I move that over into Omni Reference, and then I prompt for what I want: sat at a desk, talking to camera. Now, obviously, you’d want to go into more detail: tell it what the room’s like, whether it’s an office, a modern office, a children’s bedroom, whatever it is. For the sake of this tutorial, I’m going to leave it really, really plain. We prompt for that and wait for it to generate. Now, still, even though we’re using Omni Reference, sometimes you can see it just changes, which is probably down to my settings allowing a little bit of stylization. But these are pretty good. Yeah, I didn’t prompt for anything in the background; you’d want to make sure you prompt for that. This one’s good and this one’s good. I’m using an image I made earlier; I really like this one. Okay, let’s download it.
The next step: we’re going to come into Hedra. Hedra is one of my favorite tools. If I click video, this is where we can lip sync and make this toy talk. So I add my frame first: upload image, and here he is; this is my little image right here. Now I want to add audio. I could use an external tool like ElevenLabs to create the audio, or I can actually do it inside here. If I go generate speech, let’s choose a voice here; click play and you can have a little listen to these. Okay, I like this voice right here; I think it kind of suits the toy. Now I’m going to type a text prompt, a simple one: ‘Hey, welcome to my vlog. Don’t forget to like and subscribe.’ The language is on auto-select, so I don’t need to do any of that, and we can hit add to video. Now I can just generate this and wait for the generation to come through. You can imagine, if this works and the toy gets to speak, Labubu lovers, you could create whole vlogs with these guys. You could be making stories about whatever you wanted. I’ve thought about a documentary style, with them laughing at themselves about how viral they are, and you could put them in proper documentary, real-world settings.
I thought about having a documentary style about
107
them laughing at themselves about how viral they
108
are.
109
And you could put them in proper documentary
110
real world settings.
111
Okay, that’s finished generating.
112
Let’s take a look at it.
113
Hey, welcome to my vlog.
114
Don’t forget to like and subscribe.
115
Okay, so we’ve got a synced character.
116
Lububu actually talking.
117
Somebody has to make this series.
118
It’s gone wild.
119
This is an idea that you can use.
120
So this was a quick tutorial just on
121
how to create this, maybe an idea for
122
a channel and how to use a couple
123
of great tools to create them.
— Editing Essentials: My AI Project Overview —
So, on to the editing section of the course. I just want to give you a quick rundown of what is and isn’t currently possible with AI and what you can expect. No, right now we are not in a position to just dump everything on a timeline and say ‘please make a story out of this.’ Not possible just yet, but that will be the case shortly, I’m sure. Now, there are some great things that AI can do, and I’m going to show you them in this course: things like CapCut, where captions can be AI-generated automatically. Also, we can do some effects; this is me at the desk here, there’s me right there, and you can see some effects inside here. And we want to do things like removing watermarks: if I show you here, look at the top-left corner and you’ll see there’s a watermark right there from Haiper. I’ll cover how to get rid of that, either with AI or inside your edit. Then I’m going to give you some quick tips on things like titles, transitions, color grading, changing the color of things, and editing hacks, secrets, and things I like to think about when I’m editing AI video. It doesn’t matter which tool you’re using; these all apply to everything. You could be using the free CapCut software, Premiere Pro, Final Cut, anything you want. It applies to all. This is the editing section.
— AI Video Tools: Say Goodbye to Watermarks —
Now, one thing you might come up against when you’re creating AI videos (and this all depends on the tool you’re using and the package you have) is watermarks that appear in your videos, which are really, really frustrating, obviously. Let me give you an example. This clip right here, an exploding clip made inside Pika which, if you remember, we did a couple of sections ago: you can see that it has this watermark here, and then it moves to here, to make sure you can’t just remove it quite simply. Yeah, it’s up here, and now it’s down here, you see? Different tools have these; if you upgrade, you can obviously remove them, but it’s to protect the tools from having you use them for free or extremely cheaply and then not getting any recognition for it. It’s kind of a trade-off. Now, there are several ways to deal with this.
You could search ‘AI video watermark remover’. A lot of the big tools don’t have this built in, for obvious reasons: they don’t want you to remove their watermarks. But if I just go to this first one here, VMake AI (there are loads and loads on here), I can upload that exact clip, and you see it right there. Then, if I play their version, where they say they’ve removed it, I can actually still see it here; but after a moment, when it explodes, boom, now it’s gone. If I play this carefully, you can see there is just a bit of blurring, a bit of fuzziness here, but if you’re not looking for it, that does help. You could also just replace this with the still image you have, although the color might be slightly off.
Here’s the way I deal with watermarks, and it all depends. If you have a moving shot like this, it’s much harder to do; I would use a tool like that one and just clip out the bit you want to get rid of. But you can also just use your editing suite. So here’s a clip I used. Do you remember, when I was creating our course project, for some reason Runway wouldn’t let me generate this child drawing (it blocked it for whatever reason), so I used Haiper. But the version of Haiper I have adds a watermark. Now, this is a much simpler shot to cover, and I’ll show you how I do it.
So here you see there’s barely any movement; that’s great. Now, the way to cover this: this is the video clip right here, and this is the still image that I have. If I move this up here (I’ll just move that back down), you can see that this is the still image. It doesn’t move; let me just play that. And if I turn it off so you can’t see it, you can see the clip below it. Now, there’s a small amount of arm movement, obviously, so it doesn’t match exactly, but that’s only a tiny bit at the top and the top right, so it’s very easy for me to fix. Any editing tool will have this; you could either just crop it or clip it. Inside Premiere here, I go to Opacity and select this mask tool right here, you see that, and I can just move it up and move that in. I only need a small amount; the smaller you can make it, the better. I can also make the feathering wider so it blends, like that. Take that away: do I see the clip? No. So now it does this.
And I’ve got rid of my watermark. So you can be quite inventive with this; obviously, the more movement there is, the worse it looks. I’ve had shots and clips where I’ve had to make the still match the shot exactly on its first frame, and on the end frame zoom it in and match it exactly so it doesn’t show up. For shots where there’s no movement, if you have watermarks, that’s the easiest way: use your still image right there and cover the watermark with a crop, a clip tool, or an opacity mask. All editing software will have the ability to do something like this. So those were my tips and my ways of removing watermarks.
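The crop-plus-feather trick above is really just alpha compositing: the still-image patch is fully opaque in the middle and fades to transparent across a feather band at its edges. Here is a minimal numeric sketch of that math in Python (illustrative only, not Premiere’s actual API):

```python
# Feathered patch: opacity is 1.0 in the interior and ramps linearly to 0
# across a `feather`-pixel band at each edge, so the still image blends
# into the moving video instead of showing a hard seam.
def feather_alpha(x: int, y: int, w: int, h: int, feather: int) -> float:
    """Opacity at pixel (x, y) of a w*h patch with soft edges."""
    dist = min(x, y, w - 1 - x, h - 1 - y)  # distance to the nearest edge
    return min(1.0, dist / feather) if feather else 1.0

def blend(video_px: float, still_px: float, alpha: float) -> float:
    """Standard over-composite of the still patch onto the video frame."""
    return alpha * still_px + (1.0 - alpha) * video_px

# Centre of a 100x40 patch with a 10-pixel feather is fully the still image...
print(feather_alpha(50, 20, 100, 40, 10))  # 1.0
# ...while a pixel 5 px from the edge is a 50/50 mix.
print(feather_alpha(5, 20, 100, 40, 10))   # 0.5
```

This is why the narrator keeps the patch small and the feather wide: a small opaque core hides the watermark, and the wide ramp hides the seam.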
— AI-Generated Captions: A Step-by-Step Guide —
Now, something a lot of people want to do in videos, especially if you’re creating for social media, is add captions. That’s what you might call subtitles or closed captions, right here at the bottom of the screen; or, if you’re doing vertical video, you put them in the middle so that nothing interrupts them. They’re a really great way to aid retention. I once heard this great thing from Ryan Trahan, a YouTube creator: he said that if somebody is watching a screen that’s fast-moving, and they’re reading the screen, it’s very hard for them to stop watching. So captions can really help retention. Now, loads and loads of tools have this. I’m going to show you inside CapCut here; it’s a free editing tool. You could use Filmora, which I’ve shown you earlier, or really any editing tool will have this now. And is it so much an AI tool? Yes, it’s definitely using artificial intelligence to detect what you’re saying. And it’s super simple. Even inside your editing software you can probably add closed captions, if you’re using something like Premiere, but this is a lot easier. I often do them inside CapCut or Filmora because they’re made for this; these editing suites are made for social media.
So if I grab this clip right here, let me just show you. You’ve probably seen this lecture before; it’s from section two, early on in the course. If I go up to the top and just click captions here (I don’t need bilingual or any of this other stuff), I can just hit generate. If I had a pro plan, I could do things like highlight certain keywords, have it identify filler words, and all these other things. But look, as quick as that, in real time, while I was still talking, it’s done it for me. You can see it along the bottom right here; if I just play this: ‘on to AI fundamentals. In this section, we’re going to go over some of the backbone…’ and it’s done. And it’s always really amazing; it’s always very accurate. Now, of course, you can’t see that too well because of the color and size, so I can do things like change the font size, like here, or bold it out. Should I try color? Maybe let’s go bright. Yeah, let’s go bright red for this. I’m applying this to everything, so all my captions change at the same time. My alignment is in the center. There are also some other effects up here, like bubble, if I want to put this into a speech bubble or something; obviously that’s not something I’m doing for this video. Let’s have a look at some effects. Now, these are pro. Look at this: if I put a stroke around it, you can see it more easily. I quite like the yellow, but not on this color; I could add this to make it easier to read. So you can play with all the different setups, and some of these are really quite nice. Not for this one (that’s a TikTok-style font, isn’t it?), but some of these are really, really nice. So I can add these, and now let me play that a little bit for you: ‘on to AI fundamentals. In this section, we’re going to go over some of the backbone knowledge that you need regarding AI. Please don’t skip through this…’ So that’s done automatically for you. Super amazing, super quick. Thank goodness this exists. So
38
this was an example in CapCut. Like I said, if you need to use these, you could download
39
this software for free and play with it upgrade if you need to. If you want to join pro to
40
have more access to things. I use this tool and filmora for captions if I ever need it.
41
And if you’re doing social media stuff, you definitely will, or probably will. And if
42
you are wanting sometimes I just put it at the beginning of my videos, you know, when
43
you’re doing a quick intro and you’re like, Hey, welcome, blah, blah, blah. And I just
44
have it for the first 1015 seconds, your most crucial point to get people keep people watching
45
and retain that. So I do it for the beginning and not for the rest of the video, completely
46
up to you. I just thought I’d show you this because a lot of people were asking AI if
47
you like AI captions automated generated right here inside CapCut.
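Under the hood, auto-caption tools export to plain subtitle formats such as SRT, which is simple enough to generate yourself from timed text segments. A small sketch (the segment data here is made up for illustration):

```python
# Build an SRT subtitle file from (start_seconds, end_seconds, text) segments.
# Each SRT entry is: index, "HH:MM:SS,mmm --> HH:MM:SS,mmm", text, blank line.
def srt_timestamp(seconds: float) -> str:
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments) -> str:
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

captions = [
    (0.0, 2.5, "On to AI fundamentals."),
    (2.5, 6.0, "In this section we're going to go over some backbone knowledge."),
]
print(to_srt(captions))
```

Knowing the format is handy when a tool’s styling doesn’t suit you: export (or write) the SRT, then restyle the captions in whichever editor you prefer.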
— Transforming Videos with AI Effects —
Now, whilst we’re inside the editing software (CapCut here, but this applies to lots of different tools, depending on the editing software you’re using), there may be AI editing features and effects available inside them. Now, I would strongly suggest you do this work with the images and video we’re creating in tools like Runway and Midjourney, as we’ve been doing. But I do have to show you, because these will get more and more advanced as time goes on; platforms like CapCut are moving especially quickly with this, and so are Filmora and some other tools like that. If I go over here to my feed and select my shot (10 seconds maximum, unfortunately), then go up to AI stylize, I can see all these different styles right here, and I can see the prompt: ‘cyberpunk style girl, stunningly beautiful, tech clothes’. Okay, so what if I just choose one of these, fashion photography? All right, so I choose one, with my 10-second clip here, and I click generate; it should generate in this style. There are other things I can do right here, for which I’d have to allow my images to be uploaded. I don’t need to show you that (we’ve done lots of image generation), but I can turn any of my photos into these different styles. So let’s give this a try, comic style, and generate.
Now, if you just want to try this out, you don’t have to subscribe to the premium version with monthly billing: for one US dollar you can have a trial. So if you think, ‘maybe this is something I want to try out first, to see how good it is’, well, for $1 you can just try it, paying by PayPal or credit card, and the most you’ve lost is one US dollar to see if this works.
And here we have it: you can see it’s applied that same effect, and it actually looks really, really good. Let me just play this from the beginning. Oh, I start as a woman, because the prompt said ‘woman’, but then it changes to me (definitely me there; that’s a really nice shot that way), and then I’m a woman again. So, just a note for yourself when you’re doing the prompting: it even came up with a suggestion, ‘are you sure you want to prompt this?’ I was expecting it to take just the cyberpunk effect, but it actually took the exact prompt, ‘female’. So change it to male, or, if you are female, keep it female. But look how good that looks. Wow, like a really amazing animated cyberpunk; that looks incredible. Really good. So that’s just showing you some of the AI stylize tools available in things like CapCut, and these will keep growing across all the different tools, getting more and more numerous.
— Course Project: Editing —
So, the editing part of this course. This is not AI-centric, and earlier on in the course you saw me link videos on how to use specific tools. If you want to use Premiere Pro, like I’m using now, this will be very familiar: you’ll see what I’m doing and understand it based on that earlier tutorial. But of course, use CapCut, Filmora, Final Cut, whatever tool you want, because we’ve already made the videos inside whichever tools you were using. It’s just drag and drop, and you choosing things like your transitions, how long you hold shots for, etc. I’m going to go over some basics that I like to think about when I’m editing. I’ll show you a few of these, and then I’ll get on with it and edit; you don’t have to watch me in real time. So, the things I like to think about are titles (which you definitely want, of course), the grade, and the edit with regard to timing.
First things first: if I drag across, you saw that when I was making these, I just plonked in this title right here. What I want to do is add that this is at Pearl Harbor, with the date it’s happening; and this one is in Hiroshima, or close by, with its date. Okay, so I’m just going to change those over. Let’s edit those texts. Quite simply: ‘Pearl Harbor’. Let’s scale that to fit better, right in the middle, I think. And then I’m also going to (if I just move my effects panel right there) add a little drop shadow inside here. Let’s put that in. You won’t see much of it at all; I turn up the opacity and the softness, but you’ll see it with and without. For example, I turn the opacity down to zero and then up to one hundred: just the tiniest bit, not much at all, but just enough to make it stand out in places like here, where the light gets lost. So I put ‘Pearl Harbor’, and then let’s change this date here: it was December 7, 1941. Again, let me scale that to fit better and move it there. In the same way, I’m going to add a drop shadow, which you’ll see more prominently here. For example, if I put this down to zero, there’s nothing, and there’s one hundred; you see that. I’m going to adjust the softness so it’s not so hard: there it’s hard around the edge, there it’s very soft, so I move it to somewhere around here, and the distance I can set like this. So now, if I play that, it looks something like this. Nice.
And the other thing (this is going to be your savior when you edit with AI): I’m going to do a dissolve. A cross dissolve: fade in and fade out. Now, you’ll see that I’ve used this in places like here; there’s a fade. And also when we come to show the end of a scene like this, and the passing of time: we don’t go from this shot and then see Amy walk back to her table; we just fade out, fade in, and then cut into it. It shows a passing of time that would otherwise be missed, because generating another shot of Amy walking to the table would be quite difficult. We don’t have it, and it’s also not needed; it would just be wasted seconds, and I want to keep the video moving. So that’s what I do. Fades will be your absolute savior. Let’s look at that again. Okay, and then I’m going to do the same thing here. Let’s take a look. Okay, perfect. The only other text parts I have are the names coming in here. I might do ‘Amy, aged nine’ or something, and then ‘Amy, aged six’, something like that. I also need to make sure these come in at the right time, like this. I can add the drop shadow again to make sure of that: do it twice and play with the top copy, with high opacity and a little bit of softness, and do the same thing so that they fit like this, and I have my text. So that’s all the text I’m going to do; I won’t do it with you in real time.
The other thing I might want to do is some grading, just a really simple grade. Now, these look like 1940s shots, and so does that. I might want to adjust, for example, this one. If I go in here (and you’ll have this in every single editing software), I like to play with Lumetri Color and go into Creative. I might want to push the tint slightly more blue on this, slightly more blue. I can move down the saturation, and the vibrance I can move either up or down just slightly, and you start getting a bit more of a match between these. I need to keep working on that; it’s not quite right. So now I can play with the color and start trying to make it look more 1940s and faded, so that this matches this. Make sure this has quite a yellow hue, and that one’s also quite a yellow hue, though I think it’s a little too much, so I move the saturation down slightly. So now we go from this shot to this one.
Now, the other thing, of course, I want to do is get rid of this right here. I think, though, this shot can be twisted ever so slightly. So I think the best bet for this, rather than removing it like I showed you earlier, is to use the still image of it. If I come into Motion, I want to take the position up to maximum, and I start with my scale just a little bit like this, starting like that. Okay, it looks like I’m going to have to put in the original, so let me find the original still image of that. Okay, and now I want to match this up. I’m just going to copy it over; I don’t need to copy the color balance and all the effects. Yeah, this is just what I need; I’m just replacing this, and you would never notice. I’m going to bring that up so it doesn’t affect the side of the table. Okay, just like that. I’ve also played with some of the sounds, but you might want to move some of these up and down, higher or lower, depending on what’s happening inside your own individual edit and story.
So that was the titles; I’ve played with the grade, and then there’s the edit. Here’s a general rule: you need to keep people watching. If you’re making this for social media, it needs to be really fast-moving; if you’re making a movie for, say, a film festival, like this, it can be slightly slower. But there are some shots in here that are way too long, and I’m going to shorten them. For example, I know that when the attack happens at Pearl Harbor with Amy, this is a really long shot. Let me just turn that down slightly; I’ll turn the volume off. She walks over, has a look, explosion, and then it keeps going. We could cut there; we don’t need to keep doing more and more of this. We could cut right there, and then go back to Amy’s shot right here. This is also a very long shot, so I don’t need all of it. So I’ll go through and cut that up together, and then the other part you’re going to see is upscaling this, and you’ll see the final edit. I’m going to put it as its own video at the end of this course, and you’ll see the final one that I’m going to upload to the film festival.
So that’s what I would suggest with the edit. Think about (and I’ve laid all these down) the timing of your edit and the cuts of your shots; use transitions like fades if you need to show time passing or get from one shot to another without having the movement for it. The grade: make sure things match each other. That’s very easy to miss: I know you’ve used a style inside your images, but when you go into Runway or wherever and make a video, sometimes the color grade changes slightly, so go in and grade. Text: make sure you add text that feels like it should for the movie. You’ve seen the fonts I’ve been using; make sure the font feels right and connotes the right thing, so the connotations of that font are telling your story. Also, if you need to add captions or anything, we’ve shown you that in the previous videos. Get this together, move your sounds up and down, think about the music we’ve been adding in, and it all comes together nicely.
— Topaz AI Upscaling: Sharpening Your Visuals —
Now, on to upscaling. This will not apply to everyone, and it really will depend on a number of factors: budget, project, and so on. So, upscaling, or upresing, is exactly what it sounds like. And we have spoken about this software before; it's called Topaz. We've talked about it a little bit, but now I'm actually going to use it. We've done upscaling on our images when we created them with, say, Midjourney, for example. You remember, you can click Upscale right there to get a better image. But then when we put it into Runway, we noticed that the upscaling didn't really carry through. It did somewhat, but not as much.
Now, you can do upscaling here inside Topaz or in other tools. Topaz is a great piece of software; I can actually bring the site up for you here. Topaz is made by the same people that do Gigapixel, which is great for upresing images, and now they've brought upresing to video with Topaz as well as images. So, the main point to think about here is: do you need it?
If you're uploading just to YouTube, someone's going to watch on a phone screen, and it's just for fun, then no, probably not. But if you want to do something like the project I'm doing, where it may be blown up quite big to be viewed on a screen, say at a festival, or you just want the best of the best, or you're doing a client's project and you want it to be super crisp, then yes, you probably do want to use an upscaler like Topaz.
There is also a cost involved, obviously. Now, the prices are going to vary. There are sales on as we speak, and you can get all three. I'll show you, actually: you can get all three here, the photo, video, and Gigapixel upscalers, for $299. But also, if I go on to Video right here, that by itself is currently $299. That's a one-year subscription. You can cancel the subscription so it doesn't renew, but for a year it's $299 right now. So there definitely is a cost involved, and you need to think about whether you need this for the amount you get from it. We're talking about maybe an extra 10% in quality, sometimes 15% or 20%. That can feel huge, but it's still a fairly small quality difference. You might want it for your project, though. So I'm going to show you this, but just bear that in mind, okay?
So, I've put in my clip right here. This was that shot, if you remember, this one right here, the establishing shot of Pearl Harbor where we move across. Now, it's a nice enough shot, but it's not very crisp. The movement is not crazy smooth, there's a little bit of jumping, and the lines, for example around this wall here, are slightly soft. Some of these lines around the window are slightly soft too. For the type of image I'm doing, this 1940s, 50s style, it doesn't have to be super crisp, but I want it just a little bit better than this, and this is how we do it.
So, I've got that shot and I've just dragged and dropped it in here. This is Topaz Video AI 5 right now, and I'll show you the settings you can go through. Drag and drop that in, and you come up with a screen somewhat like this. We'll start from the very top, shall we? You can pick a preset and say, I want four times with slow motion, eight times, et cetera. We're going to make this a much better quality image, but I like to be a little bit more in control of what we're doing right here. Just take note that right now this is not 2K or 4K. It's that strange resolution the images come out with, 1280 by 768. That's not a very good resolution, obviously, so we want to make it better. I can simply go to the dropdown here under Enhancement and scale it two times, which gets us to 2560 by 1536, or I can do four times, and it tells me it's going to be 5,120 across, which makes it bigger than a 4K image. And bigger, in that regard, is better, I think, because if I'm going to use 4K when I come to my edit software, I can shrink this down manually and still have all the image density we get with that many pixels. So I like to do that. That's my preference.
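The arithmetic behind that choice is easy to check. Here's a tiny sketch in plain Python, nothing Topaz-specific; 3840 by 2160 is assumed as the 4K/UHD reference:

```python
# Quick check of the upscale arithmetic for a typical AI-video clip.
# 1280x768 is the odd resolution many AI video models output;
# 3840x2160 (UHD "4K") is a common delivery target.

def upscale(width, height, factor):
    """Return the resolution after multiplying both axes by `factor`."""
    return width * factor, height * factor

src = (1280, 768)
uhd = (3840, 2160)

print(upscale(*src, 2))  # 2x -> (2560, 1536), still under 4K
print(upscale(*src, 4))  # 4x -> (5120, 3072), wider than a UHD frame

# Oversampling like this means you can scale the clip *down* on a 4K
# timeline and keep full pixel density, instead of stretching it up.
print(upscale(*src, 4)[0] > uhd[0])  # True
```

That is why the 4x output is worth the extra processing when the delivery target is a 4K timeline.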
Now, codec we'll come to in a bit. I can set this to H.264, which is pretty much what I want to edit with, and then I'll export with that. I can also choose ProRes, which is pretty nice. Maybe I'll go with ProRes for this one.
So, adjustments. Now, this is the video type. There are lots of different models, and you can read all about them: ones specializing in enhancing faces, the default you come to right here, which is enhancement for most videos, and the best one right here, Rhea, which is advanced 4K upscaling. There are other ones on here to denoise and to sharpen animation. Rhea is the best, and it's the one I'm going to use. But please be aware that if your computer doesn't have a lot of RAM, if it's not a high-capacity machine, then it may take forever or crash doing it. It takes a lot to do this. I'm on a Mac with an M1 or M2 chip, I can't remember which, and it can process it. This clip, what are we at here, 10 or 15 seconds or so long, took about 10 minutes. I've done this before in Rhea, so not too bad for my model of machine, but please be aware of what you're editing on.
I'm going to choose this. We can turn this off, although it doesn't really let you anyway. This lets you see them side by side live, but what I've realized is that you don't actually get to see much in real time. You're better off going through this, exporting it, and viewing them side by side afterwards. I'll keep going down here. I don't want to add any noise. If you do want to add noise, I would strongly suggest you do that in post-production, in your edit, not here. Let's get the cleanest, best image that we can.
I don't need to fix any focus; there are no focus issues. Of course, if you were working with your own filmed footage and wanted to upscale and up-res it and make it amazing, and you had a bit of a focus issue, it can help with that. We're not going to have that with an AI image. No noise, no grain. I'm using Rhea, and it gives you a warning here: it needs a lot of RAM and GPU resources. Do you have them? Yeah, okay. I leave all this on, and I keep it on manual. Now, frame rate: the original is 24 frames, and that's what you'll want mostly for movies. Maybe you want 60 frames because you want to slow this down or something; then you can, but I'll keep it at 24 frames a second. I'm going to keep it on Apollo, which is the model I like to use here. And I'm not adding any slow motion, like we've just said.
Now, this is quite interesting. If I hover over here: interpolated frames. I won't go too much into depth on this, but when you have an AI video, it might tell you it's 24 frames per second when it really isn't. Sometimes it duplicates one frame into the next, so you might end up with something more like 12 or 18 unique frames. You may have enough frames to make it look like a great moving image, but it's not as smooth as real video, because there aren't really 24 distinct frames in there. If I turn on duplicate-frame replacement, what it does is detect when, say, frame one and frame two are the same but frame three is different, and then replace frame two with its own new frame that sits in between frames one and three. So it's making new frames to make the video even smoother, which is absolutely incredible. So yes, I click this on.
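To make the idea concrete, here's a naive sketch of duplicate-frame replacement with plain Python lists standing in for frames. This is an illustration only; Topaz's own models generate genuinely new in-between imagery rather than just averaging neighbours:

```python
def replace_duplicates(frames):
    """Naive stand-in for duplicate-frame interpolation.

    `frames` is a list of frames, each a list of pixel values.
    Where frame i equals frame i-1, replace frame i with the
    midpoint of its neighbours (i-1 and i+1), approximating a
    new in-between frame instead of a frozen repeat.
    """
    out = [list(f) for f in frames]
    for i in range(1, len(frames) - 1):
        if frames[i] == frames[i - 1]:
            out[i] = [(a + b) / 2 for a, b in zip(frames[i - 1], frames[i + 1])]
    return out

# A 4-frame clip where frame 1 is a frozen duplicate of frame 0:
clip = [[0, 0], [0, 0], [10, 10], [20, 20]]
print(replace_duplicates(clip))  # frame 1 becomes [5.0, 5.0]
```

The same detect-and-synthesize pattern is what smooths out AI clips that only pretend to be 24 distinct frames per second.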
Sensitivity: the default is 10, and I pretty much leave it at 10, maybe 13. I don't need any stabilization or anything like that; I just make sure my codec is fine. That's what I use for this, and I click Export. I don't need to do that for you now because it takes up a lot of RAM, and if I did it while trying to do something else at the same time, I wouldn't be able to show you. I've already exported this video, gone through this exact process with this exact setup, so we can have a look and see what they look like side by side.
So here are my two shots. Here is the one that we just took out of Topaz, and here is the original. It's kind of hard for you to see on a screen; you might not see as much detail on this recording as I do, and I apologize for that, but I'll show you as much as I can here. It's best to have these side by side to look at, and you'll notice that there's some cropping right there. That was a setting inside here where I can choose how it's handled when I choose my output resolution. So this is true 4K right here, and this is that strange 1280-odd resolution, whatever it was, that we exported from our AI model. So if I just go along and play this for you, you're going to see a very small amount of difference, but this one is very fluid. You might not even see the difference in here, but let me just show you: there's slightly less fluidity here. But again, you're not going to notice too much on a drone shot like this. If a person was moving, speaking, hands moving, et cetera, then you might.
I think I need to show you. If I go right about here, let's have a look at that, and I'll do the same thing right here. So if we have a look at this wall here, you see it's a little bit blurred, especially fuzzy around here. Let's go to our other image: a clear, straight line, clear and crisp here. Have a look at these window lines. I like to look at windows quite often for crispness, like in this one right here. Let's have a look here: slightly softer, slightly softer here, and definitely slightly more crisp there. I think I need to find the exact same shot here. This line is nice and crisp with these poles coming up; definitely less crisp and softer here. These lines are nice and straight, not blurred; these are slightly softer, just fuzzy on the edges. You can compare that one there; that's a good one. Let me put these closer together, here and here. So that's the amount. Oh, definitely on here: look at the white popping out there, the crispness of that. So that's the kind of level we're going for here. When you see detail on someone's face, you'll definitely see it as well, but that's the kind of level you're getting with upresing.
Once again, when you upscale, think: do I need it for what I'm creating? Is it worth it? How many projects am I doing? Perhaps divide the cost by project, and then decide whether it's going to be worth it.
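As a quick sanity check on that cost-per-project idea (the $299/year figure is the sale price quoted on screen; the number of projects is a made-up example for illustration):

```python
# Rough cost-per-project check for an upscaler subscription.
annual_cost = 299              # $299/year, the price quoted on screen
projects_per_year = 6          # hypothetical: plug in your own number

per_project = annual_cost / projects_per_year
print(round(per_project, 2))   # about $49.83 per project
```

If that per-project number is small next to what the project earns (or means to you), the subscription is easier to justify.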
Now, what I'm going to do here is actually batch upload. I will upload loads of my clips, all of the video clips that I have. If I go back to my window right here, this is where I've been storing them. That looks ugly. These are all my clips, and these are the names for them that are linked on my timeline in Premiere Pro. So what I'm going to do in my edit suite is highlight them all and take them offline. Then I'm going to go through here and do each individual one as a batch, so it uploads them and exports them all, under the same names they were originally called, into a new folder. Then in my edit software I'll just reconnect them, but to the new versions, the new upscaled versions. All my timeline will be exactly the same, the same movements, the same everything. It'll just be the nice new crisp images exported from inside Topaz.
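The whole trick relies on the upscaled exports keeping exactly the same file names, just in a new folder, so the edit software can relink the timeline in one go. A small sketch of that mapping in plain Python; the folder names and the `run_upscaler` call are hypothetical placeholders, since the actual upscaling here happens inside Topaz:

```python
from pathlib import Path

def plan_batch(src_dir, dst_dir, pattern="*.mp4"):
    """Map each clip in src_dir to a same-named path in dst_dir.

    Keeping names identical is what lets the edit software relink
    the timeline to the upscaled versions with no other changes.
    """
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    return [(p, dst / p.name) for p in sorted(Path(src_dir).glob(pattern))]

# Hypothetical usage: "clips" and "upscaled" are example folder names.
# for src, out in plan_batch("clips", "upscaled"):
#     run_upscaler(src, out)   # placeholder for Topaz or any upscaler
```

However you run the upscaler itself, as long as each output lands in the new folder under its original name, the relink step is a single reconnect in the edit suite.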
— The Course Project Final Video & Finding Festivals —
So, in this last lecture I'm just going to say goodbye to you. I'm not going to waste any more of your time; you can get on with making your AI video. I have submitted my video to festivals. I submitted to this one, the AIFF, Runway's own AI Film Festival, the third one they've been running here. It was very easy: submit film, mention what AI tools you've used, et cetera. And then there are prizes. Who knows if I'll get selected, or if there'll be a prize or whatever; I'll let you know. But I've submitted it to this. Also, this one came across my emails, Project Odyssey. This is another AI film festival I submitted to. And I'm going to let you know that if you search FilmFreeway, FilmFreeway itself is kind of the platform that's used: you upload your video there, fill out all the details, and then you can submit to festivals through it. So if I just search FilmFreeway for festivals with an AI category, I can see that there are lots of them that have AI categories here, so I could just submit now. And if my video is already uploaded to the FilmFreeway platform, it's very easy to submit. Some cost money, some don't, or there are different fees and things. So I'm going to be putting it into a few festivals, and I'm going to do that with various videos over the next years, because I think it's a growing and very, very exciting space. I'm glad that festivals are taking this on and getting AI video in there.
So I guess I'll show you now. You're going to see, all the way from beginning to end, how we went from an idea, got ourselves a script, did a mood board, and then started getting some shots together in a storyboard, putting these together; how I had to move things around, how I added some sound, some sound effects, some music, and the titles we researched, and put it all together. You'll see the final cut now, where I cut things up, especially at the end, going back and forward between the two Amys' scenes as the climax of the video, if you like, happens. I'll play that for you now. And then that'll be the end of the course, except, of course, for the update videos that are coming regularly. Okay. See you soon.
Okay, little lady, come on over and say goodbye to your dad, heading to work. You can watch me from the window, okay? Now be good for your mom and I'll see you later. Why don't you draw me a nice picture of us, me, you and your mom, and I can look at it when I
In fields where livings used to sway, the echoes of silence now hold sway, a soldier's song left in disarray, from war-torn nights to a broken day, the letters sent from distant lines, tear-stained words in trembling hands, a loved one's pride now caught in strands, of memories spun like shifting sands, oh, the tears that fall like rainy skies, tear-stained words in trembling hands.

