Wednesday, January 2, 2008
Interview with Gene Alexander, MaMoCa
Our interview today is with Gene Alexander, CEO of MaMoCa (www.mamoca.com), an angel-funded company based in San Clemente which is developing motion capture technology for the animation and video game market. The firm recently scored an unannounced, additional round of angel funding from the Tech Coast Angels last month, so we thought we'd check in on the company and what it is doing. Ben Kuo spoke with Gene before the holidays.
What's your technology, what's the need for what you offer?
Gene Alexander: MaMoCa comes from Markerless Motion Capture. Essentially, what we are doing is assembling 3D content from the motion of performers, without encumbrances on the actors. It's of interest to people making video games, videos, and animated TV shows directly from performances. We allow you to drive animated characters directly from actors.
Where did you get the idea for the company from?
Gene Alexander: I used to be up at Stanford, teaching biomechanical engineering. I was interested in analyzing how people move, in order to design orthopedic devices. At my previous company, Imaging Therapeutics, we were using markered motion data plus MRI models of knees and hip joints, along with deeply detailed, subject-specific motion information, in order to design better knee replacements. We spent a lot of time up there trying to extract as much information as we could from a limited amount of data. With markered motion capture, for a whole body, you only get 50 to 100 data points. It turned out to be very hard to estimate a system with so little data. So, I started working on different approaches to bring more data to the problem to improve estimates, including putting spandex suits on people. We were never able to get close to the resolution needed for our MRI models--with motion capture we only got centimeter resolution, whereas MRI was in the sub-millimeter range. So, after leaving Stanford, I took some time off, and came at the problem from a fresh perspective.
How far is the technology, is it in use yet?
Gene Alexander: We've actually been working on it for two years. The first year was mostly on paper -- using friends and family funding for patent applications. We now have a prototype of our first imaging product, and a prototype of the first software. We're able to acquire data in a limited volume, to capture hand movement data. We're moving up to have a face model next, to do face-driven animation.
Who would the typical customer be, and what would they use this for?
Gene Alexander: In the beginning we will be running a studio service, where we will set up a system in our studio, and customers can bring people in on a daily or weekly basis to capture data. They will probably use it to get very, very realistic human movements--for video games, movies, or TV. For example--when you really want to see people's muscles flex, and catch a lot of emotion out of a face or out of people's hands. That's where it would be most applicable.
Let's talk about your funding and how the firm is backed--we understand you have received some funding from the Tech Coast Angels?
Gene Alexander: We received an initial angel round from the Tech Coast Angels back in February of this year. We just closed on a follow-on round with the Tech Coast Angels at the beginning of December. Some of that has closed, some of it is still pending. We are bringing in about $450,000 in this second round. That will get us to spring or summer, then we'll be out for a venture capital round beginning in spring of 2008.
What's next in your development plans?
Gene Alexander: We're trying to get multi-pod, facial animation running this winter, in January/February. As soon as that is running, we're back into fundraising.
Have you run this past folks in the animation business, and have you gotten any feedback?
Gene Alexander: We've talked to both the game side, and the TV and movie animation side of things. The issue with game people in general is making sure they can get 60 frames per second. They like the facial animation, but it has to be workable at real-time playback rates for video games. On the movie and TV animation side, they are interested in getting this facial capture and body capture, and also the ability to have a lightweight footprint when shooting live video. One example might be from the movie I Am Legend - there are a lot of creatures that were put in there after the fact, matted in. They want to be able to do live video and motion capture simultaneously, something we'll be able to do. I think it will be a really good niche for us.