GoPro Fusion is here!

Since I first heard about the Fusion many months ago, and then got to see one in person at the GoPro 2017 Launch Event, I have been extremely excited to get my hands on one and see how useful it really is.  Well, a few weeks ago I got that chance, and I've been playing around with it ever since.  It's definitely different from any other camera I've used in the past, because it has virtually no actual controls; you just put it where you want it and walk away.

I have had some time to play with it and get a feel for what it can do, where its strengths and weaknesses are, and how I might actually use something like this in my production workflow.

As a GoPro Replacement

My initial impressions were a bit disappointing, but only because I had hoped I could use this as another camera in multicam shoots, from which I could choose my shot after the fact, giving me quite a bit of flexibility.  While this would be a really cool option, it doesn't really work, because the resolution is spread over a 360 degree spherical image, so any usable frame you extract has far less resolution than you would need to match other footage.  For very wide shots it is usable for that application, but in most situations it really wouldn't be the best option.  Add to that the very time consuming process of stitching the frames, and this doesn't really fit into most workflows.
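To put some rough numbers on that, here is a back-of-the-envelope sketch.  The spherical width below is an assumption based on the Fusion's advertised 5.2K output, and the linear pixels-per-degree approximation ignores projection distortion, so treat it as a ballpark only.

```python
# Assumed stitched spherical resolution (roughly the Fusion's advertised 5.2K);
# these are ballpark figures, not a spec.
SPHERE_W = 5228                      # pixels spanning the full 360 degrees
PX_PER_DEGREE = SPHERE_W / 360.0     # ~14.5 px per horizontal degree

def extracted_width_px(h_fov_deg: float) -> float:
    """Rough horizontal pixel count of a flat frame pulled out of the sphere."""
    return h_fov_deg * PX_PER_DEGREE

for fov in (70, 90, 120):
    print(f"{fov:>3} degree crop is roughly {extracted_width_px(fov):.0f} px wide (HD needs 1920)")
```

Even a fairly wide 120 degree crop only works out to around 1,700 pixels across, which is why an extracted frame struggles to intercut with true HD or UHD footage.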

Alternative Production Uses

However, that doesn't mean you couldn't treat it like two opposite-facing cameras, skip the stitching, and just run a lens distortion removal filter on each angle; for example, with one lens facing the presenter and the other capturing audience reaction.  So, while that is certainly not what the camera was intended for, I can see some situations where I will likely use it for exactly that.   Of course you could do the same thing with a couple of GoPro Sessions, but this gives you that capability with the option of stitching if necessary; if, for example, the speaker decided to leave frame, you could follow the action.  I'm not saying it is practical, I'm just saying these things are possible, which is something to think about when you are dealing with a camera that captures 360 degrees of motion you can use for applications other than 360 VR.
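The "lens distortion removal" step could be done any number of ways; below is one generic sketch using OpenCV's fisheye model.  Everything here is an assumption for illustration: the frame size, the camera matrix, and the distortion coefficients would all come from a real calibration of the lens (and file names like front_lens_frame.png are placeholders), so this is not GoPro's own defishing, just the general idea.

```python
import cv2
import numpy as np

# Hypothetical values: a real workflow would calibrate the lens
# (e.g. with cv2.fisheye.calibrate) rather than guess these numbers.
w, h = 3000, 3000                       # assumed per-lens frame size
K = np.array([[w / 3.2, 0.0, w / 2.0],  # rough focal length / principal point
              [0.0, w / 3.2, h / 2.0],
              [0.0, 0.0, 1.0]])
D = np.zeros((4, 1))                    # distortion coefficients from calibration

# Build the remap tables once, then apply them to every frame from that lens.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)

frame = cv2.imread("front_lens_frame.png")            # placeholder file name
flat = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("front_lens_defished.png", flat)
```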

Environment Mapping

Now, if you work in the world of 3D or visual effects, this could be a great new tool for creating environment maps for metallic objects, or for use as sphere-mapped dome projection backgrounds.  The advantages of using this 360 degree camera over the traditional capture of a chrome ball are that it can capture action, it does the stitching for you, and you don't have to paint the camera out of the frame.  So, there is a lot of potential for this in those environments.
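For anyone curious how a stitched equirectangular frame actually gets used that way, the core of the lookup is just a direction-to-latitude/longitude conversion.  This is a generic sketch of that math, not anything specific to GoPro's tools, and the axis convention (+Y up, image center looking down -Z) is an assumption that varies between renderers.

```python
import math

def envmap_uv(x: float, y: float, z: float):
    """Map a unit reflection direction to (u, v) in an equirectangular image.

    Assumes +Y is up and the image center looks down -Z; renderers differ.
    """
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)          # longitude -> horizontal
    v = 0.5 - math.asin(max(-1.0, min(1.0, y))) / math.pi  # latitude  -> vertical
    return u, v

# Example: a reflection pointing 90 degrees to the right, level with the horizon,
# samples three-quarters of the way across the middle row of the frame.
print(envmap_uv(1.0, 0.0, 0.0))   # -> (0.75, 0.5)
```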

Actual Experiences

My first attempt to film anything other than basic testing was filming kids decorating a large Christmas tree on the stage of a domed amphitheater, which I thought would look great.  But the environment was poorly lit, and the quality of the video wasn't something I'd want to show people if I were trying to show off the camera.

For my next test I used one of my car mounts, attached the camera to the top of my car, and drove down one of the coolest streets for Christmas lights in the Denver area, downtown Littleton, where they use so many lights on the trees that it looks almost like dense spider webs of light.  For this test I made two passes, once with the camera facing forward and once sideways, to see which looked better, since presumably there would be a greater concentration of resolution at the center of each lens.  Except for the suction cup car mount, there were no visible seams after stitching, and honestly, one pass didn't really look better than the other.  This footage turned out much better, and the only thing I didn't like was the motion blur of the lights, which was most obvious if you paused the image.  While it is really cool to watch the footage back and spin the view around to look at whatever you want, which I like, the resolution still doesn't compete with a regular camera.  So, you gain interactivity at the expense of resolution.

My third test was at Denver Zoo Lights, which is basically a walk through the zoo at night, with a ton of Christmas lights and animated light displays.  It was cold and snowy, and I didn't expect to get good footage, but I was actually pretty pleased with what I captured.  Not wanting to hold the camera the whole time, I strapped it to my GoPro backpack and extended the tripod above my head.  This was actually perfect, with one exception.   I kept turning to look back at the people with me (in particular, my son), and every time I did that, it turned the camera too, which is a little disorienting if you are watching it back in 360 mode.   I used the beta stabilization feature of Fusion Studio, which corrected this, but the result looked weird because of the motion blur from the low light: while the image no longer turned with me, lights that were crisp suddenly became streaks, which made sense while the camera was turning but just looked odd once the image was stable.  So, I guess the lesson here is to keep the camera facing forward while filming.

Post Production Workflow 

My focus was on post production, so I used Fusion Studio for working with the footage.  You have two options when working with Fusion Studio: either hook up the camera, in which case you have to keep it attached while working with the footage, or, as I preferred, pull the two cards, copy their contents to a common directory, and use the add media function to point to the folder containing the copied media.  From here, you have two options for how to deal with the footage, either
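As an aside on the copy-the-cards approach, the step itself is trivial to script.  The card volume names and destination folder below are assumptions for illustration; the point is just getting both lenses' files into one folder that Fusion Studio's add media function can see.

```python
import shutil
from pathlib import Path

# Hypothetical mount points and destination; the card volume names and
# folder layout on your system will differ.
CARDS = [Path("/Volumes/FUSION_FRONT/DCIM"), Path("/Volumes/FUSION_BACK/DCIM")]
DEST = Path("~/Footage/fusion_shoot").expanduser()

DEST.mkdir(parents=True, exist_ok=True)
for card in CARDS:
    for src in card.rglob("*"):
        if src.is_file():
            shutil.copy2(src, DEST / src.name)   # flatten both cards into one folder

print(f"Copied media into {DEST}; point Fusion Studio's add media at this folder.")
```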

Processing Time

I have a very fast Intel i9 7900X processor with 48GB of RAM, which on speed tests rates in the 99th percentile for computers.  It isn't the fastest computer out there, but it is near the top, so the results I got are probably at least as good as what you will experience, and maybe a lot better depending on your setup; keep that in mind when considering these figures.  Stitching files took between four and eight times real-time, and the process is entirely CPU based, so it doesn't matter what GPU you have; it won't give you any benefit.  So, if you have a forty minute clip that you want to stitch in its entirety, it will probably take between 2.66 and 5.33 hours.  To give you some perspective, the fastest MacBook Pro would probably take somewhere between six and twelve hours to stitch that clip.  More realistically, you are more likely to stitch a two minute clip, which would still take eight to sixteen minutes on my machine, and between eighteen and thirty-six minutes on the fastest MacBook Pro.  That is a lot of time for just a two minute clip.

The reason I am not being more precise is that my first tests took almost ten times real-time, but they've tweaked some things since then.  More recently, I was able to stitch at 4 to 4.5 times real-time with version 1.0, but version 1.1, which has more advanced features, is consistently rendering at 6.2 to 7.6 times real-time.  My numbers for the MacBook Pro times are based on CPU benchmark tests (specifically PassMark), which clock the fastest CPU available in a MacBook Pro (the i7 7820HQ or i7 7920HQ, depending on which site you check) at 9,443 and 10,136 respectively, versus the 22,581 rating of the i9 7900X.  I've found these benchmarks to be pretty accurate for estimating render performance differences.
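If you want to ballpark your own stitch times, the arithmetic is simple enough to sketch.  The multipliers below are the ones from my tests, and the MacBook Pro scaling assumes render time tracks the PassMark multi-core ratio, which is only an approximation.

```python
# PassMark scores quoted above; assume render time scales inversely with score.
I9_7900X_SCORE = 22581
I7_7920HQ_SCORE = 10136
MBP_SCALE = I9_7900X_SCORE / I7_7920HQ_SCORE   # roughly 2.2x slower

def stitch_minutes(clip_minutes: float, realtime_multiplier: float) -> float:
    """Wall-clock stitch time estimate: clip length times the real-time factor."""
    return clip_minutes * realtime_multiplier

for clip in (2, 40):
    for mult in (4, 8):
        mine = stitch_minutes(clip, mult)
        print(f"{clip:>2} min clip @ {mult}x: ~{mine:.0f} min here, "
              f"~{mine * MBP_SCALE:.0f} min on the fastest MacBook Pro")
```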

So, this is not meant for long format projects, but rather for clips.  You will want to choose your clips carefully before deciding what to actually export.  But, this only applies to stuff you want to have a really good stitch on, which certainly isn’t the only use for the Fusion.

Update:  OverCapture and Real-Time Preview

I recently picked up a 2nd Generation iPad Pro 12.9″, which is one of the devices capable of doing OverCapture with the Fusion.  My experience with it changed everything about how I work with the Fusion, if only because I can actually see what I am shooting, which is a pretty big deal.