Thursday, March 31, 2011

Bally Roller and the Order of the Edits

The most strained post title ever? Perhaps. I ran out of "Die Hard" movies, so I'm moving over to "Harry Potter." I don't know what I'll do when it comes time to make my eighth video. "Friday the 13th" movies, maybe? Not enough franchises made it past seven entries.

Anyway, I now have a very basic form of cutting implemented! Let's go to the tape:


In this video, I'm first showing just the ball rolling down the simple slopes, with the red camera tracking it; then, I release both the simple ball and the pachinko ball, and allow a camera to follow each one. The green camera is also set to cut more often than the red camera. Pachinko is just more exciting to watch, I guess.

Since I wanted to start with very simple editing, the cameras currently only cut back and forth between two shots: a tracking shot from behind the target, and a stationary shot which pans with the target. Here is a diagram of the behavior tree that the cameras are implementing:


That may be confusing, so I made a simpler version as well, with more words and less cryptic arrows. Also, more bright colors, and harder-to-read text. Let's take a look at that version:


Hopefully that one will help. Basically, the camera has two branches for whether it's currently engaged with an event or not (originally I was going to have the camera wander aimlessly when not engaged, but I disabled that for the video above). When it is engaged, it enters a shot idiom, which is a sequence of commands for handling a certain type of event. In this case, there's only one idiom: cut to a tracking shot behind the target, wait a certain amount of time, cut to a stationary side shot, wait, and repeat (until no longer engaged). Having shot idioms also helps deal with issues like the 180-degree rule (I never made a post about Editing Rules, did I? Don't worry, I will), since we can make sure that the next shot doesn't violate anything with respect to the previous one.
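To make the idiom loop concrete, here's a rough sketch in Python (the project itself uses behavior trees in Unity, so treat the names and timings below as made-up illustrations, not the real code). It just alternates between the two shots for as long as the camera stays engaged:

```python
import itertools

# A minimal sketch (not the actual Unity/behavior-tree code) of the single
# idiom described above: alternate between a tracking shot from behind and
# a stationary side shot while the camera is engaged with an event.
def basic_idiom(tracking_time=3.0, side_time=2.0):
    """Yield (shot_name, duration) commands, cycling until the caller stops."""
    for shot in itertools.cycle([("tracking_behind", tracking_time),
                                 ("stationary_side", side_time)]):
        yield shot

def run_idiom(engaged_duration):
    """Consume idiom commands until the event's engagement time runs out."""
    schedule, elapsed = [], 0.0
    for name, length in basic_idiom():
        if elapsed >= engaged_duration:
            break
        schedule.append(name)
        elapsed += length
    return schedule
```

The nice part of structuring it this way is the one mentioned above: since the idiom knows which shot came last, it's a natural place to hang checks like the 180-degree rule before committing to the next cut.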

Obviously this is just the foundation for the cutting system, but behavior trees make it easier to build off of this. The next step will be to implement a wider variety of shot choices (establishing shots, reverse tracking shots, aerial shots, etc). After that, I will implement more idioms for handling different types of events (a dialogue scene between two characters is probably the most important one). The shot lengths will be variable eventually too, so that cuts don't happen so regularly.

Anyway, beta reviews are coming up pretty fast. I'm planning on having more shot choices added by then, and I'm going to play around with more types of events. I'm going to try to have multiple idioms as well, but they probably won't be as complex as dialogue or crowd scenes. Still, we'll see. Stay tuned for "Bally Roller and the Half-Shot Idiom" next week!

Thursday, March 24, 2011

Self-Evaluation

Time for a flashback to my original proposal and design document! How have I been doing so far?

As often happens, I underestimated how long some things would take: specifically, I didn't expect camera placement and tracking to be as complicated as they turned out to be. On my Gantt chart, I actually planned for tracking in more complex environments to be completed today, and that did sort of happen with the Pachinko machine; however, this has been at the cost of the cutting-between-different-shots system, which I am only now working on. But I do think that holding off on cutting was the right decision: I want to make sure that individual shots are handled well before I move on to multiple shots.

On the other hand, some components turned out to be simpler than I expected. This didn't happen as often as I would like, but I'll take what I can get! The most important one was that having multiple cameras (and multiple viewports) ended up being decently simple to implement—when I first started, and didn't really know Unity well, I was worried that this would be a pain. I am glad that it wasn't. While my original idea was to have cameras added and removed as events occurred, I now think that would make things unnecessarily complex, and the GUI would be really confusing if viewports kept popping in and out. So for now, I'm just working with 3 cameras at all times.

I like making lists. Let's make a couple:

Things I've done:
  • A mouse/keyboard camera system that mimics Maya or Unity's Scene view (allowing the user to pan, rotate, and zoom at will)
  • A GUI which displays the viewports of multiple cameras within the scene
  • Keyboard commands for the user to cut between each camera on the Master Track viewport
  • The ability for objects to determine when they are involved in an event
  • Visual representations of cameras within the scene
  • Cameras that automatically choose a suitable (not in collision with geometry, and with the target unoccluded) initial position when filming an event
  • Cameras that can follow an object through complex environments without colliding with the environment
  • Cameras that can follow an object and keep it visible, without resorting to locking it in the center of the frame
  • A Director which assigns the cameras to cover different events based on whether they are already engaged
  • Very basic behavior trees for the cameras and Director

Things to do:
  • Implement cutting to different shots (obeying editing laws like the 180-degree rule)
  • Behavior trees for different types of situations (crowd scenes, dialogues, etc)
  • Priorities for events
  • Allow user to set constraints on cameras
  • Implement genres for cameras (action movie, romantic comedy, documentary, etc)
  • Figure out how to export video footage from the Master Track
  • Continue improving camera movement

So let's talk milestones. The most important remaining objective is getting the cameras to cut between different shots. Once that's working, I've pretty much accomplished the most basic idea of my project: cameras that automatically make a "virtual documentary" of the events around them. Obviously it won't be an interesting documentary, since the shot choices won't be that exciting, but it will still be a documentary. For the upcoming Beta Review, I'm hoping to have camera cutting implemented (along with constraints on some cinematography maxims like the 180-degree rule, the 30-degree rule, and so on).
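Since the 180-degree and 30-degree rules keep coming up, here's a rough Python sketch of how a cut could be validated before it's taken (a hypothetical helper, not the project's code—in 2D for simplicity, with the "line of action" given as a point and direction):

```python
import math

def angle_change_deg(old_cam, new_cam, target):
    """Angle (degrees) between the two camera directions as seen from the target."""
    v1 = (old_cam[0] - target[0], old_cam[1] - target[1])
    v2 = (new_cam[0] - target[0], new_cam[1] - target[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def same_side_of_action_line(cam_a, cam_b, line_point, line_dir):
    """180-degree rule: both cameras must sit on the same side of the line of action."""
    def side(cam):
        rel = (cam[0] - line_point[0], cam[1] - line_point[1])
        return line_dir[0] * rel[1] - line_dir[1] * rel[0]  # 2D cross product
    return side(cam_a) * side(cam_b) > 0

def cut_is_legal(old_cam, new_cam, target, line_point, line_dir):
    # 30-degree rule: the new angle must differ enough to avoid a jump cut,
    # and the 180-degree rule must hold between the two positions.
    return (angle_change_deg(old_cam, new_cam, target) >= 30.0
            and same_side_of_action_line(old_cam, new_cam, line_point, line_dir))
```

A shot idiom could call something like `cut_is_legal` on each candidate shot and skip any that would jar the viewer.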

After that, it will be time to actually make things visually interesting: more advanced behavior trees so that the cameras can react to different types of (more complicated) situations. The priority will obviously be events with multiple actors involved, like handling dialogue scenes with a shot/reverse-shot idiom. So by the time of final presentations, the cameras should use behavior trees to create much more stylish movies—users should also be able to select the cameras and set certain options like genres. Finally, a user should also be able to export their Master Track and watch the actual movie that was produced by the combination of the cameras.

Assuming everything else is completely 100% four-stars Oscar-winningly perfect by that point, remaining time will be used for adding more complex idioms, crazier genres (a high-contrast film noir one would be the best), and other fancier things.

So there's my to-do list, my did-do list, and my major upcoming milestones. Fun (kind-of-scary) stuff!

Live Free or Roll Hard (AKA: The Pachinko Movie)

Since last week, I've done some further tweaking of the tracking algorithm, so the cameras are doing a decent job of following their targets. I didn't have a movie last time, so let's start off with some video footage! Notice how the cameras try to keep the target within the center of the frame without abruptly locking in; they also move in and out a bit to stay on course while avoiding obstacles. It's kind of working like a spring system, with a desired orientation towards the target and a desired position behind it—the desired position can also be put on the side, for instance, to track alongside the ball.
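For the curious, the spring idea can be sketched in a few lines of Python (assumed stiffness and damping values, and not the actual Unity script—just the shape of the math): the camera accelerates toward a desired point offset from the target, with damping so it settles instead of oscillating.

```python
# One integration step of a spring-like follow: pull the camera toward
# target_pos + offset, damped so it doesn't overshoot forever.
def spring_follow_step(cam_pos, cam_vel, target_pos, offset,
                       stiffness=8.0, damping=4.0, dt=0.02):
    desired = tuple(t + o for t, o in zip(target_pos, offset))
    accel = tuple(stiffness * (d - p) - damping * v
                  for p, v, d in zip(cam_pos, cam_vel, desired))
    # Semi-implicit Euler: update velocity first, then position.
    new_vel = tuple(v + a * dt for v, a in zip(cam_vel, accel))
    new_pos = tuple(p + v * dt for p, v in zip(new_vel, cam_pos)[::-1] ) if False else \
              tuple(p + v * dt for p, v in zip(cam_pos, new_vel))
    return new_pos, new_vel
```

Changing `offset` is what moves the desired position behind, beside, or in front of the ball.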


While the improved camera tracking is pretty visible in the video, most of what I worked on in the past week was under the hood: specifically, I've been making the code more general (to make more complex additions easier to implement), and gearing the cameras up for cutting to different angles.

The major change is that simple behavior trees are now involved. Just importing all the right assets and files into my Unity project ended up being more of a hassle than I was expecting, but I got that done. There's now an actual Director too, who is currently just using a behavior tree when deciding which camera to assign an event to—you can see this in the video above, where the Director assigns the first occurring event to the red camera, the next event to the green camera, and so on.

The second major change (which, again, you can't see in the video) is that I redid a lot of the event and camera code, to make everything much more general: now you just tell the spheres what constitutes an event for them (in this case, it's being mobile), and they just tell the Director when they are engaged in an event, and when that event ends (in this case, when they become stationary). The spheres don't have to deal with the cameras at all, and the Director does all the mediating.
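Here's a tiny Python sketch of that mediation (hypothetical class and method names—the real version lives in Unity scripts): objects only report engagement to the Director, and the Director hands out free cameras in order.

```python
# Objects report events to the Director; the Director assigns free cameras.
# Objects never talk to cameras directly.
class Director:
    def __init__(self, camera_names):
        self.free = list(camera_names)      # e.g. ["red", "green", "blue"]
        self.assignments = {}               # event -> camera

    def event_started(self, event):
        if self.free:
            self.assignments[event] = self.free.pop(0)

    def event_ended(self, event):
        cam = self.assignments.pop(event, None)
        if cam is not None:
            self.free.append(cam)

class RollingSphere:
    def __init__(self, name, director):
        self.name, self.director = name, director

    def became_mobile(self):                # this sphere's "event" is being mobile
        self.director.event_started(self.name)

    def became_stationary(self):            # event ends when it stops
        self.director.event_ended(self.name)
```

This matches the behavior in the video: the first event goes to the red camera, the next to the green one, and so on.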

Since I think tracking is pretty solid, and I've got the start of behavior trees, next week I will play around with having the cameras cut to different angles. Surprised at how short this post is? Stay tuned for my self-evaluation!

Thursday, March 17, 2011

Follow the Leader

Well, I'm back from break, and you know what that means, right?! Right? Hello? You're going to have to yell louder at your computer screen—I can barely hear you.

What it means is that it's time for more updates. The major thing that I've been working on is getting camera navigation to work well in complex environments, because cutting to different angles won't matter much if the individual shots are incoherent. To work this stuff out, I built a sort of pachinko machine for a ball to roll down. I also gave it walls and a ceiling, to force the camera to track the ball between the pins—no cheating with shots from the outside, like I was doing with the last ball rolling down a hill. Here's the view from the top of the slope:


The first problem became clear right away: the camera was cutting to a position above the entire thing, unable to figure out how to get inside. Here's a shot of the red camera getting a spectacular view of... a gray ceiling.


This is a major issue: even if tracking is working fine, if the initial choice of camera angle is poor, the entire shot will be messed up. I didn't want to just have the initial shot be some fixed offset from the target, because the camera could end up inside another object, or the target could become occluded. It also wouldn't make sense to just search for any position where the target is visible, because there are infinitely many such positions.

For now, I ended up having the camera start at the position of the target, making the camera into a rigid body, and adding a force to push it off in one direction to determine its initial position (before it starts following the target). That way, the camera can't end up inside another object, because it will bounce off anything it hits; the target should also start out unoccluded, since the camera started at the target's position. I'm still trying to figure out exactly how to handle the collisions, since right now the camera is capable of bouncing other objects away too, which is not good. But for now, this works for determining an initial position.
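A simplified, non-physics stand-in for this trick can be sketched in Python (the real version uses Unity rigid bodies and forces; the `collides` callback here is a hypothetical scene query): march outward from the target along a chosen direction until you'd hit geometry, and stop there.

```python
# March the camera outward from the target until it would penetrate
# geometry, then keep the last safe position. Since we start at the
# target, the target begins unoccluded by construction.
def find_initial_position(target_pos, direction, collides,
                          step=0.25, max_dist=6.0):
    """`collides(pos)` is a hypothetical scene query returning True on overlap."""
    pos = target_pos
    dist = 0.0
    while dist + step <= max_dist:
        candidate = tuple(p + d * step for p, d in zip(pos, direction))
        if collides(candidate):
            break                      # stop just before hitting geometry
        pos, dist = candidate, dist + step
    return pos
```

The physics version has the nice property that the camera can also bounce *around* obstacles rather than just stopping short of them, which is why I went that route.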


Now the camera is at least starting inside the pachinko machine, so that's good! It has its own collision detection, so it won't go through the pegs while using a tweaked version of Unity's SmoothFollow script to follow the ball. However, while the camera keeps the ball in the center of its sights, it sometimes circles all the way around the ball, which gets really dizzying. Having the ball locked into the center of the view is also really jarring, especially when the ball makes sudden stops. Here's the camera shooting the ball from the wrong side:


What I ended up doing was adding a way to change which side of the target the camera should be tracking from: you can track from behind, from the side, from the front (so the camera is moving backwards), or any angle in between. I pretty much redid most of the follow script too, so that the camera is more free to avoid obstacles, while constantly trying to get back to a certain position with respect to the ball (in front, behind, etc). It's sort of working like the 180-degree rule, where the camera just stays on one side of the ball, and will only rotate around to the other side if the ball reverses direction. Here's a shot of the camera slaloming through the pegs to follow the ball:
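The "any angle in between" part boils down to rotating the desired offset around the target's direction of travel. As a Python sketch (with assumed distance and height values, not the actual script's numbers), where 180° means behind, 0° in front, and 90° alongside:

```python
import math

# Rotate the target's 2D travel direction by angle_deg in the ground plane
# to get a 3D camera offset from the target.
def desired_offset(travel_dir, angle_deg, distance=3.0, height=1.5):
    rad = math.radians(angle_deg)
    dx = travel_dir[0] * math.cos(rad) - travel_dir[1] * math.sin(rad)
    dz = travel_dir[0] * math.sin(rad) + travel_dir[1] * math.cos(rad)
    return (dx * distance, height, dz * distance)
```

Feeding this offset into the spring-style follow gives tracking from whichever side you ask for, and flipping the angle by 180° is what happens when the ball reverses direction.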


I also tested out multiple cameras on the same object (which probably wouldn't really happen in practice, since one camera would just cut between these 3 different angles). Here's the usual Ball Rolling Down A Hill, but with 3 cameras tracking along with it—one behind, one in front, and one on the side. The reason the blue camera is at more of an angle is that it was tracking from behind, and had to spin around once the ball hit the wall. I'm also working on getting the cameras to move closer to and further from their targets, since that will help them avoid collisions while keeping their targets in view.


(Sorry for all the screenshots: I actually just discovered that my Camtasia trial version expired, so I'm going to figure out a new way to make videos.)

Anyway! Next week, it's behavior tree time: since the cameras are decent at handling single shots (though obviously I'm going to continue fine-tuning), I'm going to get them cutting between different angles. To get started with behavior trees, I'm going to keep it very simple—it will probably just be things like cutting between establishing shots, tracking shots, and close-ups. But it will give me something to build off of.

~ Fin ~

Thursday, March 3, 2011

Coming Attractions

I had a pretty busy week of projects and midterms this week, so you'll have to wait for the world premiere of the next installment of the "Roll Hard" series.

Previously on "Intelligent Camera Control":
Last week we had alpha reviews, and I got some great feedback, so thanks to everyone on the review panel! I've updated my Design Documents to reflect the changes to my approach (primarily the fact that behavior trees are a bigger part of my project than I originally envisioned, because I didn't know they existed before), so you can check that out below.


Rated 'G' for "Genres":
During the Q&A portion of the alpha review, I talked more about "camera genres." Originally I was thinking of this as a pie-in-the-sky future addition to the project, and it's still something that I'll work on after I get everything else working, but it's definitely something I want to do. The idea is that you can set individual cameras to film in certain genres, which will change their behavior—for instance, an Action camera would cut often and have lots of shaky-cam, while a Romantic Comedy camera would have longer takes and use soft focus. To implement this, I'm thinking that the cameras will have variables for different traits that go from 0 to 1—traits like cutting speed (long takes vs. quick cuts), mobility (stationary vs. hyper), focus (deep vs. shallow), and so on. Within the idioms of the behavior tree, the camera will use these values when making decisions. Then the genres will just weight things differently. Of course, maybe certain genres can also have specific one-off traits—a Film Noir camera that shoots in high-contrast black and white, maybe?
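As a rough Python sketch of the trait idea (the trait names and values below are my own guesses, not a finalized design): a genre is just a preset of 0-to-1 traits, and decisions like shot length fall out of those traits.

```python
import random

# Each genre is a preset of 0-1 traits; hypothetical values for illustration.
GENRES = {
    "action":          {"cut_speed": 0.9, "mobility": 0.8, "focus": 0.3},
    "romantic_comedy": {"cut_speed": 0.2, "mobility": 0.3, "focus": 0.9},
    "documentary":     {"cut_speed": 0.5, "mobility": 0.5, "focus": 0.5},
}

def shot_length(genre, min_len=1.0, max_len=12.0, rng=None):
    """Map cut_speed (0 = long takes, 1 = quick cuts) to a shot duration,
    with jitter so cuts don't land on a perfectly regular beat."""
    rng = rng or random.Random()
    base = max_len - GENRES[genre]["cut_speed"] * (max_len - min_len)
    return base * rng.uniform(0.8, 1.2)
```

The behavior tree's idioms would consult values like these at each decision point, so switching genres changes the whole feel of the edit without touching the idioms themselves.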

Give a Hoot, Read a Book:
Another major change to my Design Document is the addition of a new reference: "Real Time Cameras," by Mark Haigh-Hutchinson (thanks, Joe!). While this book focuses mainly on video games (so it involves much more input by players than I'm working with, and also focuses on specific things like cameras for 2D games and first person shooters), it seems like it will be extremely handy for getting a camera to follow events while avoiding obstacles and keeping the event unoccluded. It includes pseudocode too, which is always nice! Plus, it's all about camera control in real time, which already makes it more useful than many of the other papers I'm using as references. I'm currently reading through it, so I'll post more updates as I come across more useful things.

Next time on "Intelligent Camera Control":
Currently cameras are just using tweaked versions of Smooth Follow and Smooth Look At to film events. I've been building a more complex environment of tunnels, turns, and obstacles, so my next goal is to get the cameras to handle mobile events better: this involves preventing the camera from colliding with obstacles, keeping the target event unoccluded, and figuring out the best path to take. I'm going to read through "Real Time Cameras," and take another look at some other papers—"Visibility Transition Planning for Dynamic Camera Control" ([OSTG09]) seems like it will be useful, since it's all about moving a camera while keeping a target in view. Once cameras can handle events in single shots properly, I can finally move on to cutting and idioms!

Anyway, Spring Break is next week, so updates will slow down for a bit. But after that... Well, I don't want to overhype "Live Free or Roll Hard," but let's just say that it will be the most amazing video you've ever seen of a sphere rolling down a hill through a curvy tunnel into a wall. Probably.