Keeping Curious: Why R&D and Hackathons Matter
Siân Adeoye | Wed, 05 Feb 2025

Siân Adeoye, one of our product managers, digs into how dedicated innovation time fuels creativity, problem-solving, and continuous learning.

We're fortunate to have an incredible team of engineers building the future of connected spaces through OKO and the Connected Spaces Platform (CSP). As a Product Manager, I'm responsible for defining and prioritising the features we want to build, and then overseeing their development to ensure we're meeting users' needs.

Working closely with the engineering team every day, I get to see their passion for experimenting, problem-solving, and pushing boundaries firsthand. That's why I'm a big advocate for creating opportunities for innovation - not just to advance our products, but to give our engineers the space to explore new ideas. We enable this continuous innovation through dedicated R&D time and annual hackathons. In this article, I'll dive into why both are essential to our culture of creativity and share insights from the engineers who make it happen.

Research & Development

R&D is the lifeblood of innovation - it's where we discover solutions to problems we didn't even know we had! That's why, during each sprint - a two-week cycle focused on completing a set amount of development tasks - we allocate 10% of the time (equivalent to one day) for developers to tackle an R&D project of their choosing. The project must satisfy at least one of the following:

  • It relates to the advancement of OKO or CSP in some way.

  • It allows engineers to upskill on their own or other clients' codebase/tech stack (we have multiple client teams working on Unreal Engine, Unity, Web, and Cloud Hosted Services).

Each engineer has full control over the scope of their R&D work - there's no product manager hovering over their shoulder asking for estimates or tracking deliverables. If they ever need inspiration, they can turn to our dedicated R&D Slack channel, where team members share project ideas, collaborate on challenges, and give behind-the-scenes sneak peeks of their progress.

Hackathons 

Hackathons have been a driving force for innovation since their inception in the late 1990s. The earliest events described as hackathons, held in 1999, were designed to bring developers together to collaborate on technical challenges. What began as a niche format quickly grew in popularity, and today hackathons are a cornerstone of creative problem-solving and innovation across industries.

These events offer a structured yet flexible environment where participants can tackle problems without the constraints of day-to-day work. By breaking down silos, accelerating development, and sparking breakthroughs, hackathons encourage experimentation and the kind of creative thinking that traditional settings often can't provide.

Understanding just how valuable they are for innovation and engagement, we host an annual themed hackathon. Over five days, participants can team up or work solo to build a working prototype that aligns with the given theme. As I write, we're gearing up for one of these hackathons, and I'm excited to see the fresh ideas and solutions that will emerge from it!

While we keep the atmosphere relaxed, we do have some key criteria for the projects. They must either extend an existing feature, demonstrate a new feature, or showcase how our current features can be used in innovative ways. This ensures that the outcomes remain relevant to our broader product goals, while still leaving plenty of room for experimentation.

Once the five days are up, participants present their projects to the wider product organisation, discussing any setbacks they experienced, what they learned, and ideas for iterating in the future. Although hackathons are often seen as engineer-focused, we make a point of including people from all disciplines at 草莓视频在线 - from QA to the art team. This diversity of skills and perspectives creates a unique collaborative space, offering employees the chance to work with people they wouldn't typically interact with on a day-to-day basis.


Initiatives like R&D and hackathons cultivate a culture of innovation and creativity, giving our engineers the autonomy to own a piece of work from ideation to implementation while incorporating their own personal development goals. It might seem counterintuitive to reallocate time from scoped product work to experimenting in uncharted territory, but it's an investment that continues to pay off. Learnings from these projects have led to faster, more confident feature execution, and completed projects can also find their way into OKO once they've been officially signed off.

But don't just take my word for it - let's find out straight from the source!

Tom Yehya - Engineer, OKO Unreal Engine Client Team 

"I really enjoy my R&D time as it allows me to get creative and explore technologies I don't typically work with. My first complete R&D project was adding a simple right-click context menu in the Unreal Editor Extensions to speed up the creation of Space Entities - the core of OKO spaces. It had been a big pain point for me having to swap tabs, drag and drop actors, and then convert them.

Although the project was relatively simple, it had a big impact on the user experience. Seeing it be well received and successfully integrated into OKO felt like a major accomplishment."

As part of Tom's R&D time, he added a right-click context menu in the Unreal Editor Extensions, streamlining the creation of Space Entities in OKO and improving workflow efficiency.


Adrian Meredith - Specialist Engineer, OKO Web Client Team

"Some of the most pivotal features I've worked on started as R&D experiments, with some taking up to five years to find their perfect application. For me, R&D is about pushing boundaries, challenging assumptions, and acting as insurance against stagnation.

As a rendering lead, most of my projects have been graphics-related, with the goal of pushing the product in ways it's never been pushed before when it comes to 'gameplay'. During the first hackathon, my team created a game where users had to roll through an obstacle course in a giant ball and get to the top first! It was such a success that the following year we took it to the next level: players would navigate a creepy forest in the mountains, trying to avoid being caught by the zombies hunting them down.

The key to successful R&D is creating an environment where 'bad' ideas are not just acceptable, but necessary. Every revolutionary product started as someone's 'terrible' idea, and in my experience the only true failure isn't pursuing an idea that doesn't work; it's being too afraid to try. When you free yourself from the pressure of immediate success and embrace the joy of exploration, that's where the real fun begins."

Adrian's 2024 hackathon project 'Escape The Dark Forest' was a fan favourite!


Michelle Shyr - Engineer, OKO Unity Client Team

"In my last hackathon, I explored AR development and created an interactive experience where hand tracking and gesture recognition in AR could detect user gestures and translate them into hand emojis displayed in the OKO space. This year I hope to take it one step further by adding support for full-body pose recognition!

I enjoy R&D time and hackathons because they provide the freedom to explore new ideas without the usual constraints, often leading to innovative breakthroughs that wouldn't emerge in a structured environment. The fast-paced nature of a hackathon makes problem-solving exhilarating, pushing me to think quickly, adapt, and iterate in real time."

As part of Michelle's hackathon project, she developed an AR experience that uses hand tracking and gesture recognition to display hand emojis in OKO.


We know that the path to technical innovation isn't linear and doesn't happen by accident - it happens when people are given the freedom to explore and experiment. R&D and hackathons aren't just side projects at 草莓视频在线; they're a vital part of how we stay ahead, challenge assumptions, and unlock new possibilities. The true success of a project is measured by the learnings made along the way, irrespective of whether it leads to a working prototype. Whether it's refining existing tools, discovering unexpected solutions, or chasing a "terrible" idea that turns out to be brilliant, this culture of continuous learning fuels everything we do.

Meet The Magnopians: Jon Raftery
草莓视频在线 | Fri, 10 Jan 2025

We sat down with Jon Raftery, one of our lead engineers who's recently joined us, to find out why he took a leap from the AAA games industry to escape "more of the same".


You worked for many years in the games industry, why did you leave? 

Anybody who has paid any attention at all to the video games industry recently will be aware of the huge number of lay-offs that have taken place in the last few years. It's been truly heartbreaking to hear about all these talented people losing their livelihoods. And it happened to me.

After many years of happily working on a well-known strategy game franchise, and only two months after a promotion, I suddenly found my job disappearing from underneath me. To say I was surprised would be an understatement, as I'd been a very loyal employee through good times and bad. But still, I found myself adding the dreaded green "open to work" banner to my LinkedIn profile. The job market can be a harsh and brutal place if you haven't dipped your toe in there in a while. I wanted a good job, and fast, so naturally I started applying to roles that matched my skill set. Engineer. Team leader. Audio specialist.

As I started taking interviews, I found myself getting a very strong feeling of déjà vu. After working in a particular role for so long, and being interviewed by people in pretty much the exact position I had recently been in, recruiting for the exact same role I'd just left, it all started feeling very familiar. The person sat opposite me... was me, just a few months ago. Did I really want to be doing the same thing in the same-sized company for the same kind of game? The games, people, engine, and so on would be different. But I'd still be solving the same problems, fixing the same types of bugs, and having the same conversations. Was I ready for "more of the same"?

The answer was no and I started widening my search. 

How did you hear about us? 

I saw an engineer role advertised and I was intrigued. I'd heard the name "草莓视频在线" before from a game I'd played on my virtual reality headset: an immersive experience where you explore the International Space Station. After some research, I found that the company was active in quite a few different areas.

How would you describe what we do? 

草莓视频在线 is not a game developer, yet many AAA veterans work there and the team has developed hugely successful Fortnite events. It's not a tech company, even though it has a tech platform - OKO - a suite of apps and plugins that allow users to create connected 3D spaces across realities. It's not a media or production company, although the team has been involved with the recent Fallout television series. And it's not a boutique XR studio, even though it's created standout bespoke experiences for the likes of Disney, NBC, and Meta. The team has many talents, but its USP is helping others tackle unquantified challenges using technology and creativity.


Was it easy for you to leave the games industry behind? 

I thought that I had allowed my mind to fossilise into an AAA mindset. It was a little scary to imagine myself potentially jumping between quite different projects. Different tools, products, engines. And of course, different people working on them. I asked myself: was it sensible to consider a sideways jump at this stage in my career, or should I stick with the familiar?

When I was lucky enough to be offered the job, I had a big decision to make. This wasn't just a new job; it was a new direction in my career. A big part of me advised sticking to the safe path - more of the same. But if I'd always listened to those voices, I wouldn't have gone into game development in the first place.

How does 草莓视频在线 differ from the work you were doing before? 

Everything I'm doing now is as far removed from the job I had, and the jobs I was initially heading towards, as you can get.

A big part of the job that's new for me is the number of relationships I manage. I find myself working with different companies, people, and processes, and learning something new from all of them. This is a big change from the traditional developer-publisher relationship I was so used to at my previous company, and looking back it's clear just how insular I'd become as a developer. I'm now exposed to so much more and, although the learning curve has been steep at times, it's refreshing to always be taking on new challenges. Within the company, inter-project collaboration is encouraged, and this time next year it's possible I could be using a different programming language, on a different engine, for a different game, for a different client - all while working at the same company.

Do you work in the office or remotely?

草莓视频在线 has offices in the US and UK and offers remote work. This is something that's becoming increasingly scarce in our post-lockdown world of work. It's not that I dislike working in an office. I just dislike the time and expense of commuting for a job I can do literally anywhere with a laptop and a decent internet connection. As a bonus, the UK office is always open for anyone who wants to use it, and I'll head in every so often to meet up with my colleagues face-to-face. Often in the pub afterwards as well.

What do you hope people get out of reading this? 

I hope this serves as a salutary lesson about risk and reward. I loved my old job, and I'll always be very proud of what I achieved, but sometimes life throws banana skins your way and it's important to reframe them as opportunities. If the axe falls on you, don't rush to automatically limit yourself to "more of the same". There are plenty of opportunities to thrive if you're prepared to step outside your comfort zone.

What's your favorite thing to do when you're not working?

Aside from family time and playing video games, I try to counteract my screen time by keeping myself in touch with nature. Lots of long walks in the woods, gardening, gathering berries in the summer, and looking up at all the twinkling dots in the night sky. When you look at a distant star you are essentially looking at the past, as it takes the light so long to reach us. Due to light pollution, most people will live their lives having never seen the Milky Way. It's a sight to behold.


What are you reading/listening to right now?

In terms of reading, I always find myself coming back to the poetry of John "No man is an island" Donne. He has a fascinating life story; from scandalous rake and "visitor of ladies" in his youth, to preacher at St Paul's Cathedral in his later years. That's quite a story arc, and he lived during a fascinating period of history. To counteract this cultural intake I also read lots of trashy Science Fiction novels, the less scientifically plausible, the better.

How do you want to leave a mark on the world - personally or professionally?

I think generally you should try to leave things in a better state than when you found them. There's also a lot of negativity on social media, which I make an effort to avoid. We carry all the world's information around on our smartphones, yet people are mostly using it to watch cat videos or get into arguments with strangers. As a parent, I find this concerning.

What's the motto or guiding principle you live your life by?

The golden rule; "Treat others as you'd like to be treated yourself".

Building Cross Platform, Cross Reality, Social Experiences Using XR
Alessio Regalbuto | Tue, 17 Dec 2024

Presenting at MetroXRAINE 2024

Earlier this year, our Global Director of Innovation and I presented "Building Cross Platform, Cross Reality Social Experiences Using XR" at the MetroXRAINE 2024 conference. If you missed the event, this article captures the highlights of our presentation and provides details on our Connected Spaces Platform (CSP) and how it can help developers create interoperable social experiences using XR.

What is Cross-Reality (XR)?

XR is the convergence of Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), creating environments where virtual and physical spaces blend seamlessly. It allows users to step into shared digital worlds, interact with persistent virtual objects anchored to real-world locations, and collaborate in real-time.

What is Cross-Platform?

Cross-platform XR ensures these immersive experiences are accessible on a variety of devices, from mobile phones and desktops to VR headsets and AR glasses. Whether you're wearing a high-fidelity Meta Quest 3 headset or joining on a smartphone, the content remains consistent and synchronized.

Why Multiplayer is Essential to XR

Testing an XR multiplayer prototype at 草莓视频在线

In a world that's information-rich but experience-poor, the true magic of XR lies in bringing people together. Multiplayer XR experiences create shared environments where people can interact with each other in real time - whether it's a virtual concert where attendees in VR and AR see the same digital performers, or a global team collaborating on a shared 3D prototype anchored in physical space. This kind of interaction opens up new possibilities for entertainment, collaboration, and community building.

A Multiplayer Example 

Screenshots from the Cryptic Cabinet open source project by 草莓视频在线 and Meta

Prototypes like Cryptic Cabinet, a mixed-reality escape room we developed with Meta, showcase what's possible. Using Meta's Presence Platform, this experience enabled multiple users to interact with digital objects within a physical room while maintaining low-latency synchronization. Such prototypes highlight how XR can blend the physical and digital to create engaging, real-time multiplayer experiences.

Technologies Making Cross-Reality Social Experiences Possible

We've built a powerful toolset to tackle the unique challenges of XR development. Central to this is our Connected Spaces Platform (CSP) - an open-source middleware that simplifies the creation of cross-reality, cross-platform experiences.

Discussing the Connected Spaces Platform

Schema of the Connected Spaces Platform architecture

Key Features of CSP:

  • Real-Time Multiplayer: Low-latency, synchronized interactions across platforms.

  • Cross-Platform Compatibility: Operability with major engines like Unity, Unreal, and WebXR.

  • Spatial Anchoring: Ensures objects and interactions remain persistent across sessions and devices.

  • Scalable Infrastructure: CSP supports everything from intimate gatherings to large-scale events, adapting in real-time to user demand.

Using an alpha version of this tech, we created the world's largest connected space for Expo 2020 Dubai. Over 39 months, we designed and developed a 4 km² digital twin of the Expo site, allowing millions of physical and virtual visitors to interact with a unified digital layer.

How It Worked:

  • Real-Time Synchronization: On-site and remote visitors experienced events in perfect sync, thanks to our real-time multiplayer capabilities.

  • Spatial Persistence: Digital objects and interactions remained anchored in the physical Expo site, allowing for a consistent and engaging experience across sessions.

  • Cross-Platform Accessibility: Participants could join via mobile, web, or VR, ensuring inclusivity for a global audience.

The project demonstrated how XR can bridge the gap between physical and digital spaces, creating a shared experience for people "there" and "not there."

Overcoming Technical Challenges

Building cross-reality, cross-platform social experiences isn't without its hurdles. Developers face several key challenges:

  • Coordinate Systems: Every device uses its own coordinate system, requiring seamless transformations to ensure consistency.

  • Interoperability: Ensuring a seamless experience across hardware ecosystems like Meta Quest, ARCore, and WebXR requires extensive testing and integration.

  • Rendering Optimization: Different devices have varying performance capabilities, making it crucial to optimize rendering for smooth, immersive experiences.

We've streamlined this process by developing plugins for Web, Unity, and Unreal on top of the Connected Spaces Platform to handle these aspects for you. They are part of a set of cross-platform multiplayer applications that we call OKO.

Our APIs convert positions and rotations into the appropriate coordinate system for the platform you are targeting, and offer multiple functionalities to replicate transforms and related metadata, ensuring that your player remains correctly interoperable between different viewing modes (AR / VR / default) and different platforms (Unreal / Unity / Web).
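To make the coordinate-system point concrete, here is a minimal, illustrative sketch written in Verse purely for readability - it is not the actual CSP or OKO API, and the function name is our own. Unity positions are expressed in metres with Y up and Z forward, while Unreal positions use centimetres with Z up and X forward, so replicating a position between the two means remapping axes and rescaling units:

using { /UnrealEngine.com/Temporary/SpatialMath }

# Illustrative only: convert a Unity-style position (metres, Y up, Z forward)
# into an Unreal-style position (centimetres, Z up, X forward).
UnityPositionToUnreal(UnityPosition:vector3):vector3=
    # Unity forward (Z) maps to Unreal forward (X), Unity right (X) to Unreal right (Y),
    # and Unity up (Y) to Unreal up (Z); multiplying by 100.0 converts metres to centimetres.
    vector3{X := UnityPosition.Z * 100.0, Y := UnityPosition.X * 100.0, Z := UnityPosition.Y * 100.0}

Rotations need an equivalent axis remap, and a similar mapping applies between each engine and the web's right-handed, metre-based convention; the plugins described above handle these conversions so application code doesn't have to.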

Furthermore, our plugins adapt to the rendering performance of the targeted device (desktop or mobile) using performance optimization techniques such as LODs, culling, and simplified rendering pipelines for lower-end devices. 

To learn more about OKO, check out these resources:

What's Next for XR?

As technology advances, several trends are shaping the future of XR:

  • Lightweight AR Glasses: Innovations like Meta's Project Orion and Snap's Spectacles are making AR more accessible and user-friendly.

  • 5G and Cloud Rendering: Faster data speeds and cloud-based processing allow devices to offload computationally intensive tasks, enabling lighter, more mobile hardware.

  • Gaussian Splats and Digital Twins: These technologies enhance realism in XR environments, improving spatial interactions and visual fidelity.

These trends will empower developers to build richer, more inclusive XR experiences, driving adoption across industries like healthcare, education, and entertainment. It's an exciting time to explore what's possible and to push the limits of how we connect in the digital age!


Meet the Magnopians: Eugenia Chung
草莓视频在线 | Mon, 02 Dec 2024

Eugenia is a Lead Producer at 草莓视频在线, working on projects like Metallica: Fuel Fire Fury, Karol G MSB Fortnite, the Fortnite Remix Concert in Times Square with Snoop Dogg and Ice Spice, and unannounced projects.

Previously, Eugenia has been a creative producer, creative director, and editor for various short- and long-form projects, including an award-winning Kyrgyz feature narrative film, "After The Rain (Jamgyrdan Kiin)"; VR concerts with Megan Thee Stallion, T-Pain, and K-pop artists; film projects with UN Women, the President of Kyrgyzstan, Roza Otunbayeva, and the US Embassy of Kyrgyzstan; and media projects with K-pop artists.


Tell us more about your role at 草莓视频在线

I'm one of the producers at 草莓视频在线, currently leading the Fortnite concert experiences. If you're asking what I do every day: as producers, we put out fires and unblock roads, all while moving projects forward with as much excellence as the team can deliver within the limits of time and budget, while still keeping a smile and running toward the vision set for the project. Sounds impossible? It often feels that way, but that's what all the producers at 草莓视频在线 are doing every day - looking the impossible dead in the eye and grinning back at it with optimistic anxiety. We're the coxswains of the crew, steering the team's morale, rhythm, and velocity in the direction we're headed, all the while paving out the bumpy roads in advance so the team has as smooth a course as possible. We become everything and anything the team and project need, rolling up our sleeves and wrestling with the mountain. I was once told by my former mentors that "the best form of a producer is one that is always in service of the team and the vision of the project."

Personally, what keeps me going every day when I wake up is reminding myself and the team of the joy of creating projects that are technically groundbreaking and culturally paradigm-shifting, saying, "If it were easy, anyone could do it. Alas, we are the chosen ones!"

What has been your proudest/standout moment while working at 草莓视频在线?

Aside from the obvious moments of seeing projects launch, receiving a lot of praise, and seeing the team's faces gleaming with pride - personally, I would say it's when I see the spark of fun and glitter of passion in the eyes of the team as they go forth and conquer impossible obstacles, finding ways to build and innovate around the blockage, with creative juices flowing and elevating the excellence of the project to another level.

What attracted you to 草莓视频在线?

The MO of "Where physical & digital meet to create extraordinary experiences", which is in line with my personal MO of wanting to be at the forefront of the "evolution of storytelling that bends reality".

What made you decide to pursue a career in this field?

I discovered that I had a passion for storytelling when I was in high school, working a summer job at a community art center making videos. I felt a spark and a drive to create stories that impact people, and eventually the way people think and feel, influencing culture. That passion kept fueling me to where I am today.

What's your favorite thing to do when you're not working?

Searching for the perfect coffee shop on weekends (good hand-drip coffee, roasts their own beans, spacious high-ceilinged hip vibes, big windows with perfect nature views), thrift hunting for vintage furniture, and binge-playing the latest games.

What would you consider to be your 'superpower'?

If I really wanted to and my schedule allowed it, I could sleep more than 20 hours. Sleep is my zen. Or again, if I had free time, I could binge-play an entire game without stopping.

How do you want to leave a mark on the world - personally or professionally?

Make cool sh*t that impacts the culture the next generation will grow up in and experience. As someone working in a creative field, I have the privilege and responsibility that the things I'm involved in reach the masses and have influence in small and big ways. By supporting and serving my team with excellence, generosity, and fun, I influence the stories and projects the team creates, which in turn impact audiences in ways that hopefully leave a mark of joy.

If you had unlimited resources and funding, what project or initiative would you launch?

HA! It's top secret and confidential! 😉 But the gist is this: when the technology and fidelity reach a point where you can't differentiate between the digital and reality, I have many ideas and stories I've been brewing over the years that I want to create. I'm just waiting for the tech and the times to catch up so that what I'm envisioning becomes possible.

What's a unique tradition or ritual you have in your life that brings you joy or fulfillment?

Turning up some music at the start of the day while brewing hand-drip coffee, then lounging by the window with a blank state of mind - priceless.

If you could wake up in the body of another person (just for one day) who would it be and why?

If it's "a person", probably myself in the far future - I want to see where all of this is headed. If it's anyone, then my cat - what a good life she has.

Blending Pop Culture and Virtual Reality: Crafting Immersive Experiences in Meta Horizon
草莓视频在线 | Wed, 27 Nov 2024

Virtual reality (VR) is changing how we engage with entertainment by offering more immersive, interactive environments. In this article, we'll share our thoughts on how developers can use pop culture to enhance social VR experiences, drawing on our work creating The Office World for Meta Horizon. We'll explore how familiar cultural references can make virtual spaces more inviting, helping users feel connected and comfortable as they navigate these new environments.

Using Pop Culture to Create Emotionally Resonant Virtual Worlds

Pop culture has an unmatched ability to unite people through shared interests, memories, and emotions. Whether it's iconic moments from movies, beloved TV characters, viral memes, or nostalgic video games, it serves as a cultural shorthand that instantly resonates across diverse groups. In virtual environments, users not only observe but actively engage with these references, creating new, personalized experiences around them.

For developers, integrating pop culture into virtual worlds goes beyond simply replicating familiar scenes or characters. It's about leveraging the emotional connections people have with these properties to create a connection. By embedding elements of pop culture, developers can create spaces where users feel more at home, sparking interactions based on shared fandoms that might not have happened otherwise. 

Pop culture references also act as a gateway to accessibility in VR. For new users, stepping into a virtual world filled with recognizable elements from movies, music, or games makes the experience less intimidating. It provides a familiar entry point, helping users feel instantly connected to the environment. 

When building The Office World, we were committed to faithfully recreating the iconic Dunder Mifflin office. Yes, that meant watching a lot of TV - all in the name of research, of course (and regular reviews with our IP stakeholders). From Pam's reception desk to Michael's office, and even the wall where the show's memorable 'talking head' interviews were filmed, we made sure players would instantly recognize the key landmarks that bring the world of The Office to life.

Developing Immersive Pop Culture Experiences

Creating an engaging pop culture experience begins with understanding the audience. Developers need to know what fans love about a particular property and how they want to interact with it in a virtual space. But it's important to strike a balance between making the experience accessible to newcomers and rewarding hardcore fans with Easter eggs and references they will appreciate.

A deep understanding of how to harness the interactive nature of virtual reality is also required. Developers should focus on crafting experiences that allow users to go beyond passive consumption and actively participate in the world. For instance, building interactive challenges or storylines based on key moments from a show or movie can create a sense of agency for users, allowing them to become part of the narrative.

In The Office World, players can step into iconic moments from the series through engaging mini-games. These experiences allow users to actively participate in the narrative, such as navigating the chaotic Kevin's Chili-Thon, where they must carry Kevin's infamous pot of chili while dodging office obstacles. Additionally, players can sort and deliver packages in "Package Pickle", where efficiency and speed earn them extra Schrute Bucks. By incorporating familiar scenarios and interactive challenges, The Office World enables fans to feel like they're part of the show, enhancing their connection to the beloved characters and storylines.

Additionally, adding social elements - such as spaces for fans to collaborate, compete, or showcase their creativity - can further deepen engagement, transforming the experience from a simple virtual homage into a dynamic, evolving environment where the community plays an active role in shaping the world.

Designing for Retention and Replayability

One of the critical goals in developing virtual worlds is retention - encouraging players to return to the world time and time again. This requires a focus on more than just the initial "wow" factor of the experience; it's about creating a world that evolves with the player. Drawing from game design principles, developers can implement progression systems, challenges that scale with player skills, and social mechanics that encourage collaborative play. For example, rather than just allowing users to explore a static environment, developers can introduce missions, unlockable content, or evolving narratives that provide players with reasons to continue their journey.

In The Office World, players can fully immerse themselves in the quirky culture of Dunder Mifflin by climbing the ranks and earning promotions to various job titles featured in the show. The game fosters a dynamic social environment, encouraging roleplay through interactive features such as sitting at reception and pretending to be Pam, or pushing the "You Are Late" button for comedic effect.

When the office is overrun by bats, players can choose to channel their inner Dwight and bravely attempt to catch them, or panic and flee like the rest of the staff. Additional features like a stamp in the accountants' section allow players to approve any document in the office, while vending machines offer Dunderballs for a fun game reminiscent of classic moments from the series. With global leaderboards, unlockable customization items, and unique roleplaying scenarios, players have new objectives and social experiences every time they return, keeping the world dynamic and engaging.

Collaborating with Industry Partners

In the development process, collaboration with external partners or intellectual property (IP) owners can be a game-changer. Direct access to creators and producers allows developers to stay true to the original vision while adding layers of interactivity that make the experience fresh and exciting. 

These collaborations often provide access to a wealth of creative resources, from concept art and storyboards to behind-the-scenes insights into the creators' vision. This inside knowledge ensures that the virtual world not only reflects the original work but also adds new layers of interactivity and immersion that are possible only in VR. Moreover, working closely with IP owners opens the door to early feedback, which can be crucial in fine-tuning the experience for authenticity. For example, integrating the distinct tone, themes, and even fan expectations can elevate a project from a simple adaptation to a truly immersive extension of the brand. These partnerships not only foster creative innovation but also help developers navigate the complexities of intellectual property, ensuring that the final product aligns with both the brand's identity and the potential of virtual worlds.

During the development of The Office World, we worked closely with Meta, NBC, and Deedle Dee Productions. Regular check-ins and review meetings were essential to maintaining the integrity of the intellectual property and nailing the show's signature humor. What made the process even more unique was that many of these meetings took place within VR itself. While these sessions were often fun and playful, they also played a crucial role in helping us fine-tune the world and its features.

The Unique Advantages of Meta Horizon

Meta Horizon offers several advantages when it comes to building entertainment experiences. First and foremost, it is a social platform, where users can connect and interact with others in real-time. This communal aspect makes it an ideal environment for pop culture experiences, where fans can gather, share their passion, and even create their own narratives within the virtual space.

The platform's flexibility also allows developers to experiment with different types of interactivity, from narrative-driven adventures to freeform exploration and multiplayer challenges. By blending traditional game design with the open-ended nature of social VR, developers can create experiences that keep players coming back for more.

Best Practices and Looking Toward the Future

Developing successful pop culture worlds in Meta Horizon requires a blend of creativity, technical skill, and a deep understanding of the community. Here are some best practices for developers looking to enter this space:

  1. Focus on the Community: Create experiences that allow users to engage with each other, not just the environment. Social interaction is key to keeping players invested.

  2. Balance Accessibility and Depth: Make sure your world is enjoyable for both casual users and hardcore fans. Offer layers of content that cater to different levels of engagement.

  3. Collaborate with Partners: Working with IP holders and creators can provide valuable insights and ensure the experience remains true to the original vision.

  4. Design for Longevity: Implement game mechanics that encourage repeat visits, such as progression systems, evolving challenges, and unlockable content.

As VR continues to grow, the opportunities for integrating entertainment and pop culture into virtual worlds are expanding, giving developers the chance to create experiences that bring people together in new and exciting ways.

Meet the Magnopians: Sian Adeoye
草莓视频在线 | Fri, 08 Nov 2024

Siân Adeoye is a product manager with nearly six years of experience. She began her career in the insurance industry and transitioned to 草莓视频在线 in 2022. Based in London, but often traveling the world, she has a diverse background that includes roles as a Digital Product Analyst and actuarial intern at AEGIS London, as well as internships at Allianz and RBS.


Tell us more about your role at 草莓视频在线

I'm a product manager for two of our client teams (Unreal and Web), and my job is to define and prioritise the features that we want to build, and then oversee their development to ensure that our product is meeting the needs of our users. Our Technical Director recently wrote a great blog post introducing our product to the world; you can check it out here.

What has been your proudest/standout moment while working at 草莓视频在线?

My proudest moment was receiving a promotion! It validated all the hard work and growth of my first year at the company and reassured me that recognition and career progression are genuinely valued here.

What attracted you to 草莓视频在线?

When I first came across the job listing, OKO wasn't a public product. But based on their previous projects - which were amazing - I knew whatever product was being cooked up behind the scenes at 草莓视频在线 was something I wanted to get involved in! The cherry on top was when I saw the LinkedIn tagline, "Not sucking since 2013".

What moment has had the most significant impact on your life?

Aged 5, I joined a chess club at school because other girls in my class said they gave out a Haribo at the end of each session, and I had a killer sweet tooth. I ended up really enjoying it, and before I knew it I was spending most of my weekends for the next seven years playing competitively in tournaments all over England.

What made you decide to pursue a career in this field?

Honestly, I found my way into product management completely by chance! I graduated from university with a degree in chemical engineering but knew that I wanted to explore a new career path. During a summer internship, I got an unexpected job offer to join the company's newly formed product team, and thus began my introduction to product management. I love the challenge that comes with driving product development as it's the perfect blend of strategy and creativity, and it feels great to play a part in transforming ideas into reality.

How do you approach challenges and setbacks?

I try to break a problem down into smaller chunks so it doesn't feel overwhelming and tackle one issue at a time. My years playing chess taught me to stay calm under pressure, adapt my strategy when needed, and try not to take things personally!

If you were on a game show, what would be your specialist subject?

I feel like I'm a living library of random facts, so I'm more suited to Richard Osman's House of Games than Mastermind. I'd also love to go on Taskmaster; the randomness of the tasks appeals to me and I love seeing how people think outside the box to win.

What are you reading/listening to right now?

I'm currently reading 'Weapons of Math Destruction' by Cathy O'Neil, which explores how algorithms used in everyday life can perpetuate biases and the importance of critical thinking when working with data.

My current album on repeat is HEIS by Rema - listening to this in the gym is a cheat code for a powerful workout. 

Where is one place you want to travel to in the world?

I love traveling and have been lucky enough to tick off most of my top bucket-list destinations. I think next on my list would be Vietnam; I've heard a lot of good things from friends and want to soak up the culture and history for myself - the food is also a big bonus!

What's your go-to karaoke song?

I recently did karaoke for the first time while out in Japan and completely lost my voice attempting to sing 'Breakeven' by The Script (those high notes on the chorus are no joke!).

Behind the Scenes of Karol G's Concert in Fortnite: Combining Creativity and Technical Innovation
草莓视频在线 | Thu, 31 Oct 2024

Karol G MSB Fortnite was a concert experience built from the ground up in UEFN (Unreal Editor for Fortnite). Throughout the five-part playable concert, users collected EmPower-Ups to blast away negative energy and spread positive vibes in a neon-drenched party. The experience introduced Karol G's infectious Latin rhythms to newcomers while offering longtime fans a new way to enjoy her music. The creative and technical elements worked hand in hand to create a vibrant, digital world. We sit down with some of the team to learn about the making of Karol G MSB Fortnite.

Kevin Hanna, Creative Director, was instrumental in the concert's creation. We ask him how the project came to life: "UEFN was crucial from the start. The ability to rapidly iterate was game-changing. We didn't just want to create a passive event - we wanted it to feel like an interactive music video, where every moment meant something to the player. Whether creating special effects or ensuring that enemies moved in time with the beat, UEFN allowed us to play with those elements without a lot of downtime. That flexibility let us focus on crafting an experience that was as interactive as it was visually engaging."

One fan-favorite sequence was the waterslide, where players raced alongside Karol G while blasting away enemies. Hanna elaborates: "That waterslide sequence went through several iterations - five fully playable versions - before we finalized the one with the Joy Cannon. With UEFN, we could snap things together, test them in real time, and see how they felt in context. The real-time adjustments meant we could experiment, fail fast, and move on, which ultimately helped us create something players really connected with."

Addison Herr, Lead Designer, tells us about cross-discipline collaboration: "With the real-time feedback loop, we could prototype and test ideas immediately, whether it was gameplay or audiovisual elements. I worked closely with the art department as well as our other designers, Daryl Kimoto, Gladeline Rufo, and Thomas Wilson, and we were constantly learning from one another - they explored the intricacies of building complex mechanics in UEFN, while I gave input on audio-reactivity, shaders, and game feel. We pulled from our diverse backgrounds, ranging from rhythm game design to real-world concert visuals, to find the sweet spot between the spectacle of an arena show and satisfying interactivity."

We ask Lisa Barber, the environment artist responsible for much of the concert's look, how UEFN assisted in building the environments. "UEFN's asset library was a lifesaver. We needed a huge variety of very specific visuals - like glowing, iridescent mountains, beautiful pink foliage, and over five different unique settings - and UEFN made it easy to find what we needed. It meant we could focus on perfecting the world rather than getting bogged down in technicalities."

Motion capture was part of the production process, and Gabriela Montes, Rigger, explains its role in detail: "We used motion capture for everything related to Karol G's performance, from her dance moves to facial expressions. UEFN's character device allowed us to play the motion capture on various character types, beyond Karol G herself, with the click of a button! This flexibility meant that her character animations, as dynamic and lifelike as they were, could easily be leveraged for use throughout the project, such as for the background dancers."

Montes continues, "The amazing motion capture quality was all thanks to the people at Yoom 草莓视频在线s, Metastage, and Brazen Animation. Yoom captured data that consisted of facial data, body movement, and lipsync in one cohesive file, with production and facilitation effort from Metastage, while Brazen Animation added their animators' touch. All of this was critical for a project of this scale."

Creating custom arcade shooting mechanics 

On the technical side, Derrick Canfield, Engineering Lead, who was responsible for the arcade shooting mechanics, discusses in detail how that came together, starting with the design.

"One challenge was finding a way to implement custom arcade shooting mechanics that aligned with Fortnite's fast-paced environment, while ensuring it wasn't too difficult for casual players. We used UEFN to build custom devices that allowed us to control shooting mechanics and create unique physics for the targets. This way, players could get instant feedback without being overwhelmed by too much complexity."

He continues, breaking down the core mechanics: "One of those mechanics was an arcade-style shooting section while players rode on a waterslide as Karol G performed. Here, we chose to have shots fired from the player instantly travel to the target, rather than simulating the arc of a projectile. This approach provided instant feedback, enhanced the arcade-like feel, and simplified our logic. Additionally, because all enemies were close to the player, the projectile's travel time would be negligible."

Canfield explains how the problem was further simplified: "We opted to use spheres as the collision volumes for the targets. Raycasting against a sphere is fairly trivial, and we wanted the shooting to be forgiving in this experience. So, even if you aimed slightly above the shark's head, you'd still be rewarded as if you hit the target."

Using Verse, Epic Games' programming language for UEFN

Next, we hear how Verse was used: "We created a module named 'Physics' with a function for ray-sphere collision testing.

#Copyright (c) 草莓视频在线. All Rights Reserved.
using { /UnrealEngine.com/Temporary/SpatialMath }

Physics<public> := module:

    RaySphereTest<public>(RayOrigin:vector3, RayDirection:vector3, SphereCenter:vector3, SphereRadius:float)<transacts><decides>:float=
        M := RayOrigin - SphereCenter
        B := DotProduct(M, RayDirection)
        C := M.LengthSquared() - SphereRadius * SphereRadius

        # If the origin is outside the sphere and the ray points away from it, there is no intersection, so fail early
        not (C > 0.0 and B > 0.0)

        Discriminant := B*B - C

        # When the discriminant is negative, the ray and sphere do not intersect
        not (Discriminant < 0.0)
        
        option{ Max(0.0, -B - Sqrt(Discriminant)) }?

The function accepts an origin and direction for a ray, as well as a sphere center and radius. If the ray passes through the sphere, the distance between the ray's origin and the hit point is returned. If there is no intersection, the function fails using Verse's <decides> specifier."
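To show how a <decides> function like this is consumed, here is a minimal, hypothetical test harness - the device name and values are ours, not from the project. The call uses square brackets inside a failure context, so the `if` body only runs when the ray actually hits the sphere, the same pattern the minigame device uses further down.

using { /Fortnite.com/Devices }
using { /Verse.org/Simulation }
using { /UnrealEngine.com/Temporary/SpatialMath }
using { Physics }

# Hypothetical harness: fires a single test ray when the game begins and logs the result.
ray_sphere_demo_device := class(creative_device):
    OnBegin<override>()<suspends>:void=
        RayOrigin := vector3{X := 0.0, Y := 0.0, Z := 0.0}
        RayDirection := vector3{X := 1.0, Y := 0.0, Z := 0.0}   # assumed to already be normalised
        SphereCenter := vector3{X := 500.0, Y := 0.0, Z := 0.0}
        # Square brackets place the <decides> call in a failure context; a miss fails the `if`.
        if (HitDistance := RaySphereTest[RayOrigin, RayDirection, SphereCenter, 150.0]):
            Print("Test ray hit the sphere")
        else:
            Print("Test ray missed the sphere")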

For those new to Verse, Derrick explains its unique specifiers: "Specifiers such as <decides> and <transacts> play a crucial role in controlling how functions behave and interact with the game state.

  • <decides>: This specifier is used when a function needs to determine or "decide" a result that may not always produce a value. For example, in our ray-sphere collision test, <decides> is used because the function may or may not find a collision point, depending on whether the ray intersects the sphere.

  • <transacts>: This specifier indicates that a function can modify or "transact" changes to the game state. It's particularly important when you need to ensure that operations happen in a controlled, atomic manner, such as updating scores or player positions in real time.

For those looking to deepen their understanding of these terms and other unique elements of Verse, we recommend checking out Epic's official Verse language documentation. It provides detailed explanations and examples, making it easier to get up to speed with the language's capabilities and best practices."

Arcade Shooter Minigame Devices

Canfield walks us through how the custom Verse devices for the arcade shooter minigame were developed, especially the mechanics behind the target system and how it all comes together during gameplay.

"The target system is pretty straightforward. We created a device that holds a reference to a prop of our choice, along with a collider radius. The internal logic manages the target's state - whether it's active or not - and includes functions for spawning, despawning, and handling what happens when the target is hit. This code is barebones so a creator can extend it for their island's needs.

using { /Fortnite.com/Devices }
using { /Verse.org/Simulation }

arcade_target_device := class(creative_device):
    @editable
    Prop<private>:creative_prop = creative_prop{}
    
    @editable
    ColliderRadius<public>:float = 150.0

    var Active<private>:logic = true

    OnBegin<override>()<suspends>:void=
        Despawn()

    IsActive<public>()<transacts><decides>:void=
        Active?
        
    Spawn<public>():void=
        Prop.Show()
        set Active = true

    Despawn<public>():void=
        Prop.Hide()
        set Active = false

    Hit<public>(Agent:agent):void=
        # Add particle effects, sound effects, etc
        Despawn()

The minigame kicks off when the player steps on a trigger and ends when a timer runs out. 

using { /Fortnite.com/Devices }
using { /Fortnite.com/Characters }
using { /Verse.org/Simulation }
using { /UnrealEngine.com/Temporary/SpatialMath }
using { Physics }

arcade_shooter_minigame_device := class(creative_device):

    @editable
    BeginMinigameTrigger:trigger_device = trigger_device{}

    @editable
    EndGameTimer:timer_device = timer_device{}

    @editable
    FireTrigger:input_trigger_device = input_trigger_device{}
    
    @editable
    WeaponGranter:item_granter_device = item_granter_device{}
    
    @editable
    Targets:[]arcade_target_device = array{}

    var FireTriggerCancellable:?cancelable = false

    OnBegin<override>()<suspends>:void=
        BeginMinigameTrigger.TriggeredEvent.Subscribe(BeginMinigame)
        EndGameTimer.SuccessEvent.Subscribe(EndMinigame)

    BeginMinigame<private>(MaybeAgent:?agent):void=
        BeginMinigameTrigger.Disable()

        # Give all players a gun to use
        for(Player:GetPlayspace().GetPlayers()):
            WeaponGranter.GrantItem(Player)

        # Enable the fire trigger and listen for fire events
        FireTrigger.Enable()
        set FireTriggerCancellable = option { FireTrigger.PressedEvent.Subscribe(OnFirePressed) }

        # Spawn all targets
        for (Target:Targets):
            Target.Spawn()

        # Start the timer
        EndGameTimer.Start()

    EndMinigame<private>(MaybeAgent:?agent):void=
        BeginMinigameTrigger.Enable()
        
        # Disable the fire trigger and remove the event callback
        FireTrigger.Disable()
        if (EventCancellable := FireTriggerCancellable?):
            EventCancellable.Cancel()
        
        # Despawn any targets that still remain
        for (Target:Targets):
            Target.Despawn()

        # Reset the timer for the next time the minigame is played
        EndGameTimer.Reset()

    BulletRaycast<private>(GunAimDirection:vector3, GunPosition:vector3, Target:arcade_target_device)<transacts><decides>:vector3=
        # Execute a ray-sphere cast to see if we hit the target
        TargetTranslation := Target.GetTransform().Translation
        HitDistance := RaySphereTest[GunPosition, GunAimDirection, TargetTranslation, Target.ColliderRadius]
    
        # If the target is hit, reconstruct the hit position that will be returned by this function
        HitPosition := GunAimDirection * HitDistance + GunPosition
        option{HitPosition}?

    OnFirePressed<private>(Agent:agent):void=
        if (FortChar := Agent.GetFortCharacter[]):
            PlayerTranslation := FortChar.GetTransform().Translation
            CameraRotation := FortChar.GetViewRotation()

            # Iterate through all active targets to see if any were hit
            for (Target : Targets, Target.IsActive[], HitPosition := BulletRaycast[CameraRotation.GetLocalForward(), PlayerTranslation, Target]):
                Target.Hit(Agent)

In the OnBegin function, we subscribe BeginMinigameTrigger's triggered event to the logic that starts the minigame, and EndGameTimer's completion event to the logic that ends it.

Once it starts, we grant all players a gun and enable an input trigger that listens for when they fire. When they press "fire", it runs the raycast logic we set up earlier to check if they hit any active targets. If a target is hit, we trigger the Hit(agent) function on the target. From there you can add any additional logic you need, such as custom scoring or integration with any other game mechanics."
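As one concrete, entirely hypothetical example of that kind of extension - our sketch, not code from the project - a subclass of the arcade_target_device above could award points through a Score Manager device placed in the level, assuming the base Hit function is left overridable (i.e. not marked <final>):

using { /Fortnite.com/Devices }
using { /Verse.org/Simulation }

# Hypothetical extension: a target that awards points when hit.
# Assumes a Score Manager device is placed in the level and assigned in the editor,
# and that arcade_target_device.Hit has not been marked <final>.
scoring_arcade_target_device := class(arcade_target_device):
    @editable
    ScoreManager:score_manager_device = score_manager_device{}

    Hit<override>(Agent:agent):void=
        # Award the Score Manager's configured score to the player who landed the shot
        ScoreManager.Activate(Agent)
        # Keep the base behaviour (despawning the prop, effects, etc.)
        (super:)Hit(Agent)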

Visualizing Logic with Debug Draws

Derrick dives into how debug draws can help during development. "Debug draws are incredibly valuable when it comes to visualizing the logic in your code, particularly for something as intricate as ray-sphere collisions. In our case, you could use them to visually trace the ray's path and define the boundaries of the sphere within the game environment. This kind of visualization lets you see exactly where the ray intersects the sphere or if it misses entirely, making it much easier to debug and refine your collision logic.

For example, you can draw the ray with DrawDebugLine and the sphere with DrawDebugSphere. By doing so, you get instant visual feedback right in the game world, which helps confirm that your collision detection is functioning properly."
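As a rough sketch of what that can look like in practice - our assumption of the current Verse API surface rather than code from the project, so the exact module path, class, and parameter names may differ from the names mentioned above - Verse exposes a debug_draw helper in the Diagnostics module that can draw lines and spheres in-world:

using { /Fortnite.com/Devices }
using { /UnrealEngine.com/Temporary/SpatialMath }
using { /UnrealEngine.com/Temporary/Diagnostics }

# Hypothetical helper: visualizes a test shot by drawing the ray and the target's collider.
debug_shot_visualizer := class(creative_device):
    DebugDraw:debug_draw = debug_draw{}

    VisualizeShot(RayOrigin:vector3, RayDirection:vector3, SphereCenter:vector3, SphereRadius:float):void=
        # Draw the ray as a long segment and the collision sphere so hits and misses are visible in-game
        DebugDraw.DrawLine(RayOrigin, RayOrigin + RayDirection * 5000.0)
        DebugDraw.DrawSphere(SphereCenter, ?Radius := SphereRadius)

Calling something like this right before the RaySphereTest check makes it immediately obvious whether the forgiving collider radius is behaving as intended.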

Bringing it all together 

Derrick tells us how to bring everything together in the level, especially when setting up player input and devices for the minigame: "We set up all of the devices in the level and hooked them together.

The first item we set up was the input trigger for the player's fire action. To do that, add an Input Trigger device to your scene. Set "Input Type" to "Standard Action", and "Standard Input" to "Fire". This will make the input trigger activate when any player presses the fire button.

Next, you'll need an Item Granter to give the player a weapon to shoot the targets. I chose a rifle, but you can pick whatever fits your theme best. After that, you need to create a Trigger Device and a Timer Device. Both of these devices work well with their default settings, but you might want to adjust the timer duration to suit the experience you're crafting."

Derrick goes on to talk about the fun part - targets to shoot at! 

"Create an Arcade Target Device, which you can find wherever your Arcade Target Device logic exists within your project. Disable "Visible in Game" so the terminal mesh is not rendered. Also, find a prop to be the visual representation of your target and parent it underneath the Arcade Target Device. Adjust the prop so that its center point is aligned with the origin of the device's mesh. In this example, I've chosen a disco ball, and you can see it aligned to the center of the bottom of the console mesh, which is the pivot point. With the Arcade Target Device selected, set "Prop" to the prop you created and set the collider radius to a value that encapsulates the prop.

Then, select the prop and set the property "Can be Damaged" to false. This ensures the default shooting logic in Fortnite doesn't damage and destroy the prop. Our Verse logic is handling the destruction effects already! Duplicate the device and prop as often as you like and place them around your map as you see fit."

We ask Canfield about the final inclusion: "That's the Arcade Shooter Minigame device. Create an instance by dragging it into the level from the Content Browser, from wherever you saved the Verse file in your project. Again, set "Visible in Game" to false to hide the default terminal mesh. Set BeginMinigameTrigger to the trigger created earlier, EndGameTimer to the timer you created, FireTrigger to the input trigger, and WeaponGranter to the weapon granter we set up. Then, for Targets, add every copied target to this array. You may want to click the lock icon in the top-right corner of the Details Panel so it doesn't switch to showing the properties of the arcade targets when you drag and drop references into the Targets array."

He concludes, "You can now jump into a session and play through the minigame. Step on the trigger to equip yourself with a gun and show all of the targets. Pull out your weapon and shoot the targets before your time runs out! If you follow these instructions you'll have a fully functioning arcade-style shooter! The minigame is barebones, ready to be expanded upon with whatever fits the creator's needs – a custom scoring system, special effects, custom meshes for your targets, and more!"

Finally, we asked how the concert was received by both the audience and your team. Herr tells us, "We were unsure how people would react at first, but the feedback was incredible. It was especially meaningful to see the response from fans in a number of languages across social media platforms, particularly during the closing dance-off and in the epilogue. The climactic scenes of the experience celebrated unity and empowerment, and to see people across the world connect with all the love and care we poured into this concert was so rewarding."

Montes adds, "As a Latina artist, it was personally significant for me. Being able to share this project with my family, especially in a Spanish-speaking context, was really special. The emotional impact extended beyond the audience – it resonated with us as creators."

]]>
Behind the Scenes of Karol G鈥檚 Concert in Fortnite: Combining Creativity and Technical Innovation
AJ Sciutto speaks to Meta about the creation of Alo Moves XRExternalFri, 11 Oct 2024 10:06:22 +0000https://www.meta.com/en-gb/blog/quest/alo-moves-xr-magnopus-yoga-pilates-mindfulness/618131cf8cd8e779321e9666:6256e7c502af7b62119329a0:6708f56374cff23635705746To celebrate the launch, Director of Virtual Production AJ Sciutto spoke with the Meta Quest blog team about the entire process of creating the experience, from how the partnership came about to lessons learnt, and shares why immersive experiences are the future of fitness. Read the full in-depth interview , and check out  on the Quest Store! 

Permalink

]]>AJ Sciutto speaks to Meta about the creation of Alo Moves XRFrom pages to spaces: the internet's evolution and a new era of interoperabilitySam BirleyFri, 11 Oct 2024 09:19:55 +0000/blog/from-pages-to-spaces-the-internets-evolution-and-a-new-era-of-interoperability618131cf8cd8e779321e9666:6256e7c502af7b62119329a0:67079b3a9ee7ce49f9931dcc

Today, the internet is a series of connected pages. You move from one link to another, viewing two-dimensional pages on your device. It's not how humans were built to reason about the world, but it's a self-evident truth that it revolutionized how humanity communicates.

But as with all things in life, technology evolves.

With hardware such as Orion, Vision Pro, Quest, and Ray-Ban Meta smart glasses now available or actively being developed, it's increasingly easy to imagine that the internet of tomorrow will no longer be a series of connected pages, but a series of connected spaces. A spatial web.

As humans, we're hard-wired to reason about the physical world – it's in our DNA. When we interact with objects, we have expectations about how they'll behave. When we change something in the real world, we expect it to stay changed. We expect others to see the same thing in the same place, and we expect our interactions to have meaning.

It stands to reason then that a spatial web should behave like this too. A spatial web introduces not just a literal third dimension, but also a means to, finally, meaningfully connect our digital and physical realities. It's not just about making the internet 3D; it's about bringing it into our physical world in a way where it behaves and reacts to you based on the same paradigms as reality.

That might sound impossible with today's technology, but it's not. With the Connected Spaces Platform (CSP), which we designed to support the concept of an interoperable spatial web, the idea of giving users a way to reason about the digital world as they do with the physical world isn't science fiction; it's engineering, and many of the technical challenges have now been solved.

For instance, the services required for a spatial web aren't that different from those we depend on today: user accounts, asset management, geolocation, VOIP. Stuff that's been around for decades. Sure, there are new ones that need to tie into those (such as services for handling large volumes of users in persistent worlds, and anchored digital realities superimposed over the physical world), but it's feasible. And not just theoretically feasible – we've proven it works with our own 草莓视频在线 Cloud Services (which we've designed to plug natively into CSP).

So we know that with a set of spatial web services backing it, whichever those are, CSP allows for many users to be co-present in the same physical space, all of whom are seeing a digital layer superimposed over reality, at the same time and in the same place. And when someone changes something in that digital layer, others can see the same change in real-time, and it persists, just as the marks we make on the real world do. That's incredibly powerful.

But communicating that idea has been a challenge. Blog posts, documentation, source code – none of them really cut it when it comes to conveying the value of this idea. Much like VR, it's tough to fully grasp this concept with words alone. You can't truly understand how it feels to be in a VR experience until you put on the headset, and you can't really appreciate the impact of a persistent spatial web without seeing it for yourself.

Companies like Epic Games have successfully shown the power of first-party titles as a vehicle to visually demonstrate the value of the software underpinning them. It's been a key factor in Unreal Engine's widespread adoption across the gaming industry, creating many outstanding examples that have allowed people to truly understand what can be achieved with the technology.

Which is where OKO comes in…

OKO

So, what exactly is OKO? Functionally, it's a set of applications we've been creating over the past four years, all built leveraging the Connected Spaces Platform. Of those, we have three that are publicly available.

  1. for capturing reality and supporting co-presence through AR.

  2. built on top of CSP's C++ API. This allows experienced Unreal users to create best-in-class content with the tools and workflows they're already familiar with, and easily bring that content into OKO spaces.

  3. that leverages CSP's TypeScript API, providing users a frictionless way to access or even build their own spaces on any device, anytime, anywhere.

Since all of these applications are built on CSP, and since interoperability is itself a core value of CSP, they all communicate seamlessly with one another (despite being built in different engines, tech stacks, and languages). Using the best features of each engine and platform, they make use of nearly all of the features of CSP, and so are a wonderful testament to what's possible with the library.

So, if you want to understand whether the features and concepts in CSP might be useful or valuable to you, you will find your answer through OKO.

Let's be clear though, OKO is not a game engine like Unreal or Unity. It's not a DCC tool like Blender or Maya. It's not an open-source editor like Godot or even an asset-sharing platform like Fab.

OKO is a suite of entirely interoperable apps that enable creators to build things like digital twins, virtual production scouting, and cross-reality events. And with OKO, they will all be interactive, persistent, and connected; synchronized across realities. OKO is a spatial web ecosystem and anyone can create an account for free.

It's been a journey to get here, and the path we've taken is worth knowing about, because it tells you where we're going.

We've been fortunate to be in this space since the beginning. Through a lot of hard work, talent, and more than a little luck, we've gone on to work on and deliver some of the most challenging and interesting problems in mixed reality and cross-reality.

The OKO journey started with the World Expo in Dubai and a wonderfully evocative brief from Her Excellency Reem Al Hashimi: "Connecting minds, creating the future." We wondered: "Could we share the experience of the physical site with people around the world, and connect them across both?"

With the support of the leadership of Expo to pursue this ambitious vision, we assembled a diverse team of over 200 engineers and artists, working across seven countries. Together, we created a city-scale cross-reality space where on-site and remote visitors to Expo 2020 Dubai could connect in real-time in shared experiences.

As you might expect, there isn't a lot of off-the-shelf tech to solve a problem like that. To deliver the project (which at 4.38km², four years on, is still the largest ever anchored digital twin) we had to build a lot of tech from scratch. And after we were done, it became clear to us that a lot of what we had invented was incredibly powerful. That a less opinionated version of the technology would be transformative for a whole range of industries.

And that's when CSP and OKO were born.

Since then, we haven't stopped building them. As the industry changed, and the projects rolled through, we've continued to learn about what makes sense in this space, what problems need solving, what problems don't (also interesting!), and through OKO, we've expressed our opinion on how to solve those problems. To date, we've mostly built OKO for us; we don't expect it to be all things to all people when it comes to the spatial web, but we think it will help many see what's possible and build their own.

One notion we increasingly get asked about is digital twins. Typically they're hard to create, hard to deliver, and even harder to anchor to reality with their physical counterpart. Why can't creating them be turnkey?

It's satisfying to answer those questions with OKO, because as a spatial web ecosystem, that's one of the things it does really well. Being able to capture a real-world environment, anchor it, and bring it into Unreal to work on it, all in the space of five minutes, still feels like magic to me, and I work on it daily.

As you might expect then, we frequently use OKO to create digital twins for a range of purposes; from diverse environments and industries to projects for our clients, and even ourselves.

The internal ones are my favorite. For demos and internal meetings, we have two digital anchored recreations of our own offices. Built with OKO using Unreal, accessible via our Unity iOS client or browser-based web client and anchored to the real-world office in AR.

Any change anyone makes to the digital office, stays. Anyone can visit at any time and be co-present with anyone else. Even if one person is physically there and another is not. Cross-reality meetings are a hoot.

It often blows people's minds when they see it in action.

And what gets me really excited is that none of this is hidden 草莓视频在线 magic. None of this is gated away. It's accessible to anyone with an OKO account. We've taken everything we've learnt from every mixed reality project we've delivered and found a commonality that lets us, and now others, move faster.

I could rhapsodize all day about OKO and how awesome it is. But I'm not a salesperson, just someone who's been fortunate enough to have caught a glimpse of the future.

So, instead, . The spatial web is closer than you think.

]]>
From pages to spaces: the internet's evolution and a new era of interoperability
Designing Alo Moves XRJesse WarfieldThu, 26 Sep 2024 09:07:00 +0000/blog/designing-alo-moves-xr618131cf8cd8e779321e9666:6256e7c502af7b62119329a0:66e951ccfa37b033745f6d4cAlo Moves XR transforms how users experience yoga, Pilates, and mindfulness, making it feel like you're attending a personalized fitness class with a stunning view 鈥 all from the comfort of your home. Staying true to Alo Moves' mission of empowering people to live healthier and more fulfilling lives, this new XR experience continues that vision. Here's a behind-the-scenes look at how we collaborated with Alo Moves and Meta to create this groundbreaking fitness experience.

Partnering with Alo Moves

From the outset, our collaboration with Alo Moves and Meta centered on redefining the quality of existing fitness experiences. We aimed to create an immersive, mixed-reality environment that went beyond the typical 2D video or VR fitness experiences, transporting users while ensuring they maintained spatial awareness and fully engaged in their practice.

We experimented with various capture methodologies to determine the best way to present instructors within the headset. Our goal was to deliver a sense of presence and fidelity that made users feel like they were receiving a personalized class from a world-renowned Alo Moves instructor.

We tested stereo footage with the Red Komodo-X using a 3ality Beamsplitter, the Z-Cam Pro, and volumetric video. While stereo footage provided some sense of presence, the volumetric instructor was the clear winner, offering a truly amazing next-generation experience. We all experience the world spatially; nothing compares to being able to practice with an instructor and walk around them, or to pause and rotate the three-dimensional mini-instructors to study a pose.

Volumetric Instructors

Alo Moves worked closely with their instructors to design classes specifically for the Quest headset. Partnering with Metastage, a leading Volumetric Capture & XR production studio, we captured the instructors' classes using 106 cameras on their stage. 

This process produced volumetric assets that offer users a more lifelike and immersive experience by preserving natural movements and detail of the instructors while maintaining longer, uninterrupted sessions not generally associated with traditional volcap. 


The experiences are streamed directly to the Quest 3 headset via a content delivery network (CDN), utilizing video compression techniques similar to those found in traditional video streaming services. 

This data stream is then assembled in the headset into a fully immersive volumetric asset. This method not only minimizes the load on the headset, keeping the experience lightweight and responsive, but also allows for frequent and seamless content updates. This is especially important for ensuring a smooth user experience, while maintaining the high-quality performance expected of modern applications.

The combination of immersive visuals and photorealistic 3D instructors – including a "mini-instructor" that can be repositioned for better pose reference – creates a truly first-of-its-kind training experience.

The Portals

A key feature of the Alo Moves XR experience is the inclusion of two distinct portal types, each designed to transport users into different virtual environments that enhance their workouts and meditative practices. 

The Environmental Portal, which accompanies the fitness experiences, transports users to serene locations like Thailand and Spain, further enhancing immersion. Initially, we experimented with stereo footage and 3D elements to heighten realism, but ultimately chose an 8K photo sphere for its crystal-clear resolution and minimal resource draw. The photo sphere's curved design, along with subtle animations as part of the "window effect", gives the illusion of depth and parallax and makes you feel like you are overlooking a vast vista.

The Mindfulness Portal was crafted to deliver a more introspective experience. In collaboration with Alo Moves meditation instructors, we developed a calming visual panoramic designed to soothe rather than distract. The scene incorporates seamlessly looping, randomly generated elements that create an ever-changing yet tranquil atmosphere as well as environmental lighting elements.

The serene scenes set the tone of the meditation and allow users to choose between watching the visuals or closing their eyes and simply listening to the experience. Mixed reality was chosen to allow users to reposition the portal, whether sitting or lying down, so they have total flexibility in their meditation practice.

Interactions and UI

Alo Moves XR was designed to be controller-free, with interactions optimized for hand-tracking. Drawing inspiration from real-life interactions, we aimed for an intuitive and accessible design. The hands-first interface includes a white highlight indicator to show interactable elements, ensuring that interactions are easy to perform but not prone to accidental activation.

A first-time user video helps guide newcomers through the core interactions when they first launch the experience, ensuring a smooth and intuitive onboarding process. The user interface (UI) has undergone a significant update from Alomoves.com, integrating features designed to take full advantage of the mixed-reality environment. Despite these enhancements, it retains the key functionalities and familiar design elements that long-time Alo Moves users are accustomed to, ensuring that both new and experienced users feel comfortable navigating the platform.

Personalization

Alo Moves is committed to promoting a healthy lifestyle, and personalization plays a crucial role in this. Rather than creating pressure to engage daily, our achievement system encourages long-term consistency, rewarding users for building sustainable habits over time. 

User class history is stored on our backend services, generating class recommendations based on usage, surveys, and favorites. The profile page displays overall activity and mindfulness versus fitness minutes, and allows users to filter class history by week, month, or year. These personalized features help users build long-term healthy habits.

Our collaboration with Alo Moves, Meta, and Metastage has opened up new possibilities in the wellness space, blending cutting-edge technology with fitness in ways that resonate with both seasoned practitioners and newcomers alike. Alo Moves XR expands the horizons of wellness experiences, making them more accessible, immersive, and engaging. As this platform continues to grow, we're excited to see how it redefines what it means to pursue fitness and mindfulness in a progressive, digitally connected world.

]]>
Designing Alo Moves XR
Meet the Magnopians: Roo MacNeill草莓视频在线Mon, 23 Sep 2024 08:25:00 +0000/blog/meet-the-magnopians-roo-macneill618131cf8cd8e779321e9666:6256e7c502af7b62119329a0:66bc8d5ab2c586749386c9ff

With over 11 years of experience in the industry, Roo has worked on everything from small-scale TV advertisements up to AAA game titles and visual effects for Oscar and BAFTA-winning titles. Highlights include Christopher Robin, Avengers Infinity War, and Transformers Rise of the Beasts. 

Born and raised in Inverness, Roo firmly believes that location should never be a barrier to success and that hidden talent and ambition exist in the most remote places; they just need to be nurtured.


Tell us more about your role at 草莓视频在线

I鈥檓 a Lead Artist working on real-time projects. It's a role that includes a very hands-on approach to builds and creation, as well as team management and keeping our department tied in with all the other studio disciplines.

What attracted you to 草莓视频在线 in the first place?

After years of uncertainty in studios and roles where there was a constant fear of job loss or stagnant progress, 草莓视频在线 caught my eye. It looked super dynamic, career progression looked very possible, and the fast-paced change of projects and style really matched my interests. Throughout the interview process, I was drawn to the strong focus on exploring new techniques and technologies. The application process was a little longer than usual, but it's built to ensure the right people are in the right places for the long term! It ticked every box I had and having only been here for a few months, it already feels like home.

What made you decide to pursue a career in this field?

I had a free CD trial of  from a random magazine (which barely ran on the family PC hidden in a cupboard, as the internet was still pretty fresh and no one really knew what they were meant to do with it). With that, I discovered I could make sandcastles without having to go to the beach, and *boom*, that was me off to the races.

I have built my entire career in the film side of VFX. As cool as it's been to work on massive projects, I had hit a bit of a brick wall. I was in a spot where career progression had drastically slowed down in a system where you have to wait for the next one up to drop off the board rather than progressing because of your skillset. 

Mixed in with a very pigeonholed approach to tasking, and the wild instability of film contracting, I needed to make a change and move forward. It's very energizing to feel challenged again rather than copy-paste the same process over and over for another Marvel film.

Real-time is almost an entirely new field to me, and it's so nice after grinding away to find fresh new challenges to remind me why I got into all of this in the first place 鈥 to make awesome stuff that brings me joy.

What is the biggest lesson you鈥檝e learned in your career?

No matter the stress, the budget, the company, the project, the hardware, or the software, you need your team at the end of the day. That's where the real work is done. Invest as much time as you can into developing them, growing with them, and above all, developing a social work relationship with them. Make time for meetings and catch-ups no matter what your calendar looks like. Everyone has specialties and hidden knowledge. Budgets are always tight, timelines are always messy, and these are things that can be worked around, but if you burn an artist out, more often than not this can't be repaired.

What鈥檚 your favorite thing to do when you鈥檙e not working? 

Outside of work, I'm a massive photography fan. I spend the majority of my free time shooting and editing. I find it's a great way to keep the creativity up in a fun way on my own schedule, and it's a place to further develop skills and keep up with advancing software.

What鈥檚 your special skill?

The ability to find super tasty street food in countries I have never been to. I wish I could say it's from some exquisite palate or a superpower of the taste senses, but in reality, it's just from my deep binges on YouTube, so I know what 'vibes' to look for.

If you had unlimited resources and funding, what project or initiative would you launch?

I would love to create a full-length CG film staffed entirely by final-year students and self-learners, to capture the super-high creativity of those about to get started. It would give people the opportunity to experience what it's really like to work in a studio – something which I think is massively missing from the university and self-learning experience.

How do you approach challenges and setbacks?

On a case-by-case basis. You can't always plan for things breaking. The main thing I do is to try to approach each one with a positive mindset rather than complaining and fighting against an idea. As a Lead, if you go into a problem with a defeated mindset, how can you expect your team to be motivated? Vent your frustrations in a way that doesn't affect the team and get to work on finding a solution, it's where most of my learning comes from. Going to the pub is always good. 

What are you reading/listening to right now?

In a desperate attempt to remember all the Polish that I have forgotten, I'm watching shows I know with Polish dubbing to try to get immersed in the language again. It's the best place I have lived so far, I need more of that mountain cheese in my life.

What鈥檚 your tactic for surviving a zombie apocalypse?

I would try and get the zombies on my side, by grilling up and serving up Michelin star survivors to them. Everyone wants to keep a good chef around (even the undead) so if they aren鈥檛 trying to kill me, it鈥檚 gonna save me from a massive amount of cardio.

]]>
Meet the Magnopians: Roo MacNeill
Ben Grossmann, 草莓视频在线 CEO, speaks to A16Z Games on how video game tech is powering a revolution in Hollywood.ExternalWed, 28 Aug 2024 12:18:00 +0000https://a16zgames.substack.com/p/how-video-game-tech-is-powering-a618131cf8cd8e779321e9666:6256e7c502af7b62119329a0:66d84e8dadaaf9080b09b60b

From the Mandalorian to Fallout, games tech is changing tinseltown. Oscar-winning visual effects supervisor Ben Grossmann and virtual production developer Johnson Thomasson tell A16Z Games how.

Permalink

]]>Ben Grossmann, 草莓视频在线 CEO, speaks to A16Z Games on how video game tech is powering a revolution in Hollywood.The music is the gameplayDan TaylorThu, 15 Aug 2024 10:21:40 +0000/blog/the-music-is-the-gameplay618131cf8cd8e779321e9666:6256e7c502af7b62119329a0:66bdc038a11c7b20c32d75efHow to make an interactive rock concert as epic as possible.

As soon as we started work on bringing Metallica to Fortnite, we knew we wanted to push the boundaries of both interactivity and visual immersion. We wanted to take inspiration from previous Fortnite concerts like the visually explosive Travis Scott's Astronomical and the deeply interactive The Kid LAROI's Wild Dreams, and combine them to create insane visuals with satisfying interactivity.

Fast-forward a few months and we'd realized that we'd have to get seriously creative to fuse spectacle and gameplay into a seamless, musical experience. Here's what we learned in the process…

Get the full playthrough of the Metallica x Fortnite event in this rock-powered video .

Cognitive load & simplicity

The players' ability to ingest and respond to what they're seeing is a critical factor in any interactive experience; however, when that experience is synced to music, everything happens to the beat, and players are swept along with the tune, rather than having control of what happens when. We took a few crucial steps early on to help prevent overstimulation for players.

First off, we set out a clear structure where sections of the experience would alternate between heavy interaction and a visual celebration of the band.

The "Stadium" sections were designed to recreate the feeling of being at Metallica's world-famous live shows, providing players an opportunity to get right up next to the band and appreciate the painstakingly mocapped performance.

The first stadium section - where players could rock out to Hit The Lights.

Each segment was book-ended with a brief cut-scene to provide a narrative through-line so any changes in activity or location made sense. In retrospect, these probably could have been longer, as they were critical to the flow – something to bear in mind when designing the structure of your experience.

For our interactive sections, we made sure that the core gameplay would be familiar to Fortnite Battle Royale players – driving, jumping, shooting and so forth.

During these segments, all key visual elements were placed directly in the players' sight line, so they never had to miss the show while playing.

Band members warp in on the outside of a circular path, keeping them in the players' sight line.

And finally, we employed reductive design to focus on the essence of each section, to keep things as simple as possible.

The problem with creating simplicity is that it's harder than you might think. Less content requires less design, right? Unfortunately, the usual path to a beautifully simple design is to make something really complicated, then slowly chip away the noise until the core experience shines through.

For example, one of our high-level objectives for this experience was to recreate the physical chaos of the mosh pit using familiar Fortnite mechanics. We spent a lot of time experimenting with bumpers, flippers, d-launchers, impulse grenades, bounce pads, and more, each deployed in myriad combinations. We found a number of executions that really nailed the vibe and fun of the mosh pit, but somehow they took the user out of the concert experience. In the end a simple up-and-down bounce was all we needed.

Difficulty & pacing

It's worth noting that the simplicity of a game is not necessarily linked to its difficulty. Even a concept as elegantly simple as Tetris can be challenging for the player when the appropriate parameters are switched up. We needed to make sure that players of all skill levels could enjoy the experience on their first playthrough, and never fall out of step with the musical flow.

We also wanted to change the gameplay with each new song. We had five songs to fit into a 10-minute experience, so we limited each song to around two minutes, which didn't allow a lot of time for gameplay mechanics to develop.

In any interactive experience, players need to instantly understand what they have to do, so we deployed short, in-world text at the start of each section to make any objectives extremely clear: "Grind the lightning!" "Ascend the clock tower!" "Get to the show!". These simple instructions were tricky to get right because the fewer words you have, the more each of them matters. When possible, we made sure these objectives had a visual component in the level to make sure players knew exactly where to go.

Players need clear objectives, both written and visual, to provide context, motivation and direction.

To correctly pace the levels we had to break our core mechanics down into tunable metrics (the numerical parameters of gameplay).

For example, if players have a gap to jump, there are parameters you can tune like the width of the gap, the relative height of the landing, or the size of the landing area, all of which will affect how difficult it is for players to successfully traverse that gap. Using these parameters it's possible to define what easy, medium, and hard versions of your gameplay elements look like.
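To make "tunable metrics" concrete, the easy, medium, and hard variants of a gap jump could be captured in a small data table like the sketch below; the structure and the numbers are purely illustrative placeholders, not the values used on the project:

// Parameters controlling how hard a single gap jump is.
// All values are illustrative placeholders (in centimetres).
struct FGapJumpMetrics
{
    float GapWidth;       // horizontal distance to clear
    float LandingHeight;  // landing height relative to take-off; negative = drop down
    float LandingDepth;   // size of the landing area along the direction of travel
};

constexpr FGapJumpMetrics EasyJump   {300.0f, -50.0f, 600.0f};  // short gap, drop down, generous landing
constexpr FGapJumpMetrics MediumJump {450.0f,   0.0f, 350.0f};  // flat jump, smaller landing
constexpr FGapJumpMetrics HardJump   {550.0f,  50.0f, 200.0f};  // near max distance, jump up, tight landing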

We made sure that the first time a player encountered a mechanic, it was the easiest possible variant, and the section contained that singular mechanic only, reducing the cognitive load and creating a gentle onboarding for each element. This was most evident in "For Whom The Bell Tolls", where we introduced players to rhythmic steps, swinging bells, and rotating cogs, each in their own isolated sections, before bringing all those elements together for the most intense part of the song.

This parametric approach can also be used to map the gameplay to the music. For each track, we mapped out the various segments of gameplay directly over the top of the music's waveform, making sure they matched in both intensity and theme.

The initial gameplay map for For Whom The Bell Tolls.

We also went to great pains to make sure that no matter a player's skill, nobody got left behind. Different techniques were deployed for each section – in the race section for "Lux Æterna" we tuned down the boost on the cars so the difference in speed was mostly perceptual (shhh… don't tell anyone!). For other sections, we created a dynamic respawn system that would revive fallen players at the optimum location relative to the music (this was less effort to implement than it sounds – a single respawn point, animated to move through the level in between beats – easy). We also broke larger sections of gameplay into smaller segments with their own discrete objectives; faster players would reach these first and chill with the visuals, or dance with the band, while slower players would have a chance to catch up, before everyone warped onto the next area together.

Musical immersion

Perhaps most importantly, it was critical for us that Metallica's music didn't feel like a tacked-on soundtrack: players had to feel like they were playing the music. With this in mind, we embraced something we call Ludo-rhythmic Resonance, which has three key pillars: visual, spatial, and mechanical.

Visual is the whole game world pulsing to the beat. Do anything and everything you can to make the environment pump in time with the music. Assign different visual elements to different instruments: shake the screen on a crash cymbal, pump the FOV on the kick drum, explode lava when the vocals kick in, desaturate as you build up to a drop, then pop the color back in when it hits. This way you'll create a visual language for the sound that players will subconsciously translate, immersing them in the music.

The volcanic racetrack of "Lux Æterna" has eight unique visual components, each mapped to various musical elements of the track.

The spatial element is all about the metrics of rhythm. Knowing your metrics is the keystone of quality level design, so we spent a good chunk of our pre-production building test levels, or 'gyms', where we could reverse engineer Fortnite's metrics and map them to the music. For example, in the jumping sections of "For Whom The Bell Tolls", the platforms are spaced so that players can run and jump in time with the music. In our racing section, assuming you are going full speed, the curves switch direction every four bars, creating a rhythmic slalom. Stimulating a rhythmic input is another key tool for reinforcing that musical immersion.
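As a rough illustration of that "metrics of rhythm" idea, here is a tiny back-of-the-envelope calculation of how far apart beat-synced platforms need to be; the tempo and run speed below are placeholder values, not Fortnite's actual movement metrics:

#include <cstdio>

int main()
{
    // Placeholder values for illustration only.
    const float BPM = 116.0f;                  // track tempo in beats per minute
    const float RunSpeedCm = 550.0f;           // player run speed in cm/s
    const float SecondsPerBeat = 60.0f / BPM;  // ~0.52s between beats at 116 BPM

    // If players should land on a platform every beat while running,
    // platforms need to be roughly one beat of travel apart.
    const float PlatformSpacingCm = RunSpeedCm * SecondsPerBeat;

    std::printf("One beat every %.2fs -> platforms ~%.0fcm apart\n",
                SecondsPerBeat, PlatformSpacingCm);
    return 0;
}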

And finally, the mechanical element refers to synchronizing as much gameplay as possible to the beat. The volcanic jets in our race track all fire in time with the power chords, the boss always attacks every eight bars, machine guns have a rate of fire that matches the track's BPM, and the bells in the level swing when the bells in the music swing. As luck would have it, in "Lux Æterna" the cars' turbo was already tuned to the right tempo, which meant you could double-tap it to the beat for extra boost. And the more juice you can layer onto your mechanics, the more players will feel the beat. Once again, this all gets the player thinking and playing rhythmically to completely immerse them in the sound of Metallica's music.

The Master of Puppets shoots his death rays in time with the music, while machine guns rattle to the beat.
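The mechanical pillar boils down to the same beat-duration maths: once you know the seconds per beat, fire rates, boss attack timers, and other gameplay timings can all be derived from the track's tempo. A small hypothetical sketch (the 116 BPM figure is illustrative, and FBeatSync is not an engine or Fortnite type):

// Derive beat-locked gameplay timings from a track's tempo.
// FBeatSync is a hypothetical helper, not an Unreal Engine or Fortnite type.
struct FBeatSync
{
    float BPM = 116.0f;  // illustrative tempo

    float SecondsPerBeat() const { return 60.0f / BPM; }

    // A machine gun that fires once per beat (or twice per beat for double time).
    float FireInterval(bool bDoubleTime = false) const
    {
        return bDoubleTime ? SecondsPerBeat() * 0.5f : SecondsPerBeat();
    }

    // A boss that attacks every eight bars of 4/4 (32 beats).
    float BossAttackInterval() const { return SecondsPerBeat() * 32.0f; }
};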

Put it all together

The end result of these elements creates a seamless experience full of intense gameplay, where players always know what they are doing, never get split up, play in time to the beat, and get fully immersed in the music, while still having the chance to chill and just rock out with the band.

In the "Enter Sandman" finale, lights, fire, lava, lightning, speakers, camera, and d-launchers all fire to the beat!

Along the way, we had a lot of help and support from the good folks at Epic, most of whom had worked on concerts before and/or were Harmonix alumni, so big thanks to them for sharing their extensive expertise and experience!

Hopefully, you got a chance to play Metallica - Fuel. Fire. Fury. (or at least ) and experienced these design principles in action. We hope you'll agree that they make for a truly epic rock experience!

]]>
The music is the gameplay
Meet the Magnopians: Tunde Glover草莓视频在线Fri, 28 Jun 2024 10:04:40 +0000/blog/meet-the-magnopians-tunde-glover618131cf8cd8e779321e9666:6256e7c502af7b62119329a0:6667043736e7b14f12ff11c1

Tunde Glover is an artist with over 20 years experience working in games, creating wonderful 3D worlds. He has worked on quite a few different titles across many genres, with highlights such as Alien: Isolation, Halo Wars 2, and Shogun 2: Total War.
As a quarter century 3DsMax and Photoshop user, he's still fond of the original bright user interface from 2 decades ago! He has enjoyed and learned a great deal from the many talented people he's crossed paths with throughout his career.


Tell us more about your role at 草莓视频在线

Hi! At 草莓视频在线 I serve as one of the Lead Artists working on or supporting various projects throughout the studio. While delivering top-quality work for our clients is our primary focus, 草莓视频在线 is a very people-focused studio, so ensuring my team of artists is well-supported and guided is one of my most important mandates. As such, beyond creating and guiding project work, every day without exception I always make time for my art team. Even while navigating other responsibilities such as scheduling, general leadership meetings, and other company initiatives.

You鈥檝e been at 草莓视频在线 for a few months now, but what attracted you to 草莓视频在线 in the first place?

Initially, my growing interest in novel XR experiences, especially with all the new tech dev in the space, made me quite curious about who was doing all this cool work. I had heard the name 鈥槻葺悠翟谙哜 a few times.

When I was approached about the idea of creating with 草莓视频在线, I delved deeper into exploring what could be possible and I began to get excited. It was upon watching videos about 草莓视频在线 and hearing the CEO, Ben Grossmann, and Sol Rogers (Global Director of Innovation) talk about some of this stuff, that I became captivated by the prospect. As things drifted closer, I reached out to an old friend who鈥檇 been here for quite a while. It seemed he couldn鈥檛 be happier with the years he鈥檇 spent at 草莓视频在线, and I very much trust his judgment.

What made you decide to pursue a career in this field?

Much of the above, truly. It sounds all dreamy but it鈥檚 pretty much accurate! I鈥檝e spent over 20 years making games, and honestly, the potential for experiences feels so much broader with XR compared to what traditional games have offered so far. There are far less preconceptions.

I had always been interested in forging an art-based career, I just didn鈥檛 really know how to make it work. The starving artist stereotype wasn鈥檛 that attractive. However, having a keen interest in interactive experiences, I sort of put 2 & 2 together and I ended up looking into game dev.

The precise moment it all 鈥榗licked鈥 was while building something in 鈥楳acromedia Director鈥 (top marks for anyone who remembers this) about a quarter century ago. It dawned on me at the time that what I was actually making was a 鈥榞ame鈥.

What鈥檚 your favourite thing to do when you鈥檙e not working?

Spending time with family, especially playing (and now finally) talking with my boy. Working out, and of course, playing a lot of games. When someone figures out how to do 鈥榣ife鈥 at 1.5 times speed, I might find time for the large backlog of TV shows, books, and even graphic novels I need to catch up on.

If you could have any other job in the world, what would it be?

As weird as this sounds, landscaping! Specifically Indian sandstone settings. I find the idea of creating a semi-permanent jigsaw puzzle in someone鈥檚 garden really appealing. Especially if the garden or space has super weird shapes!

Where would you most like to travel to in the world?

I鈥檓 very much a homebody and not really a huge traveller, but I鈥檇 really like to visit one of those US diners in some rural town you tend to see in all of those TV shows. Where there鈥檚 some famous local speciality that everyone swears tastes incredible. Food that is both massive in size and rich in flavour. Even if it took me six months to burn off the calories, it might be worth it.

How do you want to leave a mark on the world 鈥 personally or professionally?

Earlier in life I was very keen on creating something like the defining game of the decade or inventing a whole new genre of some sort. While I still have some of these aspirations, now my main focus is trying to ensure my son has a really good life. While it sounds like the typical answer, it鈥檚 very much because I like to over-prepare for everything to try and ensure success. The thing is, there seems to be absolutely nothing you can do to prepare yourself to successfully raise a child. It鈥檚 like 鈥榯ry hard, all the time, forever鈥 is pretty much all there is, haha. A fun challenge all the same.

If you were on a gameshow, what would be your specialist subject?

My subject would be memorable one-liners from 90's movies - a personal favourite is from Back to the Future, "Roads? Where we're going, we don't need roads."

What are you reading/listening to/watching right now?

I鈥檓 still trying to finish Stephen Baxter鈥檚 Time鈥檚 Tapestry series after something like 10 years now... So much so, I鈥檇 probably need to start over again. It鈥檚 especially sad since I love his books, just seemingly not enough to read them anymore! Time really has a way of getting away from you, which is pretty funny considering what I was reading. Naturally, it鈥檚 all the internet's fault, and I have no agency at all.

What鈥檚 your life motto/ guiding principle you live your life by?

鈥淭ry hard, all the time鈥, seems to work pretty well so far 馃槀

]]>
Meet the Magnopians: Tunde Glover
How a Virtual Art Department (VAD) contributed to the Fallout TV show草莓视频在线Wed, 12 Jun 2024 20:12:30 +0000/blog/how-a-virtual-art-department-vad-helped-create-the-fallout-tv-show618131cf8cd8e779321e9666:6256e7c502af7b62119329a0:666823a23ff8714b83612749The Virtual Art Department (VAD) is increasingly becoming a standard in TV and filmmaking as it can provide great creative flexibility and efficiency over traditional methods. Here's a look at how a Virtual Art Department works and our experience of operating one for Fallout, the post-apocalyptic series produced by Kilter Films for Amazon Prime Video. 

What is a Virtual Art Department? 

A Virtual Art Department (VAD) is a team that uses digital technology to create virtual environments for film and television. Unlike traditional art departments that build physical sets and props, a VAD works with 3D modeling software and real-time game engines, like Unreal Engine, to design and visualize these elements in a virtual space. 

The Importance of the VAD

By leveraging real-time technologies, VADs enable filmmakers to visualize and iterate on scenes live, making the creative process more flexible and efficient. This real-time feedback loop allows directors, cinematographers, and production designers to see and adjust digital environments and assets on the fly, ensuring a seamless blend with live-action footage. On the day of shooting, these elements are projected onto an LED wall where they can react to novel camera positions. Hence the phrase, in-camera visual effects (ICVFX).

Roles within our Virtual Art Department

A VAD brings together people with diverse skills to make the magic happen. On Fallout we used the following roles: 

  • VAD supervisor: Oversees the artistic vision and ensures it aligns with the overall production design.

  • VAD Lead/Virtual Gaffer: Responsible for managing digital lighting setups and ensuring the virtual scenes are lit to achieve the desired artistic and realistic effects, while seamlessly matching on-set lighting.

  • VAD Lighting & Look Dev Supervisor: Oversees the overall visual aesthetics, including lighting and material properties, to ensure consistency and quality across all digital assets.

  • VAD Environment Lead: Manages the design, modeling, texturing, and rendering of virtual landscapes, sets, and backgrounds to ensure they align with the artistic vision and technical requirements.

  • VAD 2D Art Director: Responsible for the visual style and quality of all 2D artwork. This includes overseeing concept art, matte paintings, and other 2D elements.

  • VAD Matte Painter: Creates detailed and realistic digital backgrounds and environments that seamlessly integrate into virtual scenes. They paint 2D images or composite photographic elements to achieve the desired look, enhancing the visual depth and atmosphere of scenes.

  • VAD Artist: Creates and develops various digital assets, including 3D models, textures, and environments. Collaborates with other team members to ensure that digital assets align with the overall artistic vision and technical specifications, contributing to the overall visual storytelling of the production.

  • VAD Technical Artist: While both VAD Artists and VAD Technical Artists create digital assets, the latter focuses more on ensuring that these assets are efficiently created, integrated, and perform well within the virtual environment. Technical Artists optimize workflows, develop tools, troubleshoot issues, and ensure that digital assets are efficiently created and integrated without compromising performance or quality.

  • VAD Animation & Rigging: Responsible for creating character and object animations and developing the underlying skeletal structures (rigs) that enable those animations to be applied efficiently and realistically within the virtual environment.

The 草莓视频在线 team on set for Fallout. 

How the VAD was used on Fallout 

Script Breakdown and Environment Design

One of the first things we did was provide creative input for leveraging virtual production across the entire show. At the time, only the pilot had been written, and our involvement in the project was much like that of a creative department head. We worked with the filmmakers to break down the scripts into scenes and environments that would benefit the most from in-camera visual effects and other virtual production techniques. They settled on key environments such as the picnic area and vault door scenes in Vault 33, the cafeteria in Vault 4, and the New California Republic's base inside the Griffith Observatory. Additionally, any scenes involving the Vertibird were earmarked for LED process shots.

Virtual Set Construction

The VAD built all virtual sets entirely inside Unreal Engine. This approach allowed the filmmakers to use Unreal's suite of virtual production tools throughout the creative process. Virtual scouting tools enabled the filmmakers to perform tech scouts in VR, block out action, and place cameras and characters in precise locations. This meticulous planning was crucial for creating a heatmap of the environment, helping to focus creative efforts on the most critical areas and optimize resources effectively.

Real-time Modifications and Flexibility

Working in real-time 3D was essential for the production. Unreal Engine offered the flexibility to make creative modifications to the set during pre-production and even on shooting days. For instance, during the shooting of the Vault Door set, the idea to dynamically change the lighting mid-shot was implemented swiftly by the VAD, showcasing the agility and responsiveness that real-time tools provided.

In this pivotal scene, Lucy leaves the Vault for the first time. Authenticity for this moment is key – and finding the right blend between physical and virtual provides the audience with access to that authenticity. To that end, much of the set was physical – the handrails, the walkway, the plank, the control mechanisms, and even the door itself. Meanwhile, the interior walls of the vault were rendered virtually, extending the scope of the set. To make room for the giant moving vault door, we offset that set piece from the LED wall itself and filled the gap with floating LED panels instead. These were giant wild walls the crew could position at any point in the set to accommodate extreme angles. Mounting these panels with motion trackers allowed the team to dynamically update the image on screen no matter where they were placed – effectively creating a moving window into the virtual world. This streamlined production by eliminating the need for the crew to search for expansive caverns or construct large set pieces on a stage, all without sacrificing visual impact.

Seamless Integration with Physical Sets

A significant aspect of the VAD's work involved coordinating with the physical art department to ensure a seamless blend between physical and virtual elements. This included building 1:1 scale 3D versions of a number of sets in Unreal, even those that did not necessarily plan on using an LED volume stage. This work allowed the filmmakers to visualize scenes, compose shots, inform every department on the show about the creative intent, and help anticipate any potential difficulties. For scenes that did intend to use the LED volume stage, this close collaboration ensured that the digital set matched the physical set pieces, maintaining consistency in light and color, materials, and design.

We faced a unique challenge with this environment from episode 1. There is a cornfield inside of Vault 32 where most community activities take place, including Lucy's wedding. The scene features layers of practical corn that extend into the virtual set, as well as apple trees, distant hallways leading to the neighboring vaults, and a synthetic nuclear-powered projection of a bucolic farm setting. The result is a sort of odd twist on LED volume production. The story calls for an in-world projection of virtual imagery onto a giant wall; meanwhile, as crew members on set, we are looking at that very same imagery projected onto an LED wall. At times, it really did feel as if we were inside the world of Fallout. This particular environment required multiple layers of virtual corn, six different lighting setups – as well as dynamic hooks to enable lighting cues in real-time – and a virtual projection surface capable of playing back 8k image sequences at 24fps.

On-set Workflow and Collaboration

On set, the VAD worked closely with multiple departments to ensure the highest fidelity for final imagery. While on-set Virtual Production Unreal Operators maintained camera tracking, adjusted frustums, loaded and unloaded scenes, ensured the proper functioning of the LED wall, and wrangled shot data, the VAD team could come in and adjust the look of the scene in real-time between shots. This allowed the VAD to operate much like a traditional on-set Art Department – responding to the ever-changing needs of production. This close collaboration was crucial in achieving the desired visual effects and maintaining the creative vision of the show.

Challenges

One of the biggest challenges faced by the VAD was shooting on 35mm film within an LED volume, a relatively unprecedented approach. This required extensive testing and fine-tuning of genlock, color calibration, and exposure settings. Despite these challenges, the VAD's expertise allowed for the successful capture of final pixels on set, minimizing the need for extensive post-production VFX work.

While there were challenges, the VAD we put together for Fallout worked because of meticulous planning and the seamless integration of virtual and physical elements. The ability to make real-time adjustments and collaborate closely with various departments ensured that the show maintained its unique visual style. The innovative use of Unreal Engine and other virtual production tools not only enhanced the storytelling but also set new benchmarks for virtual production workflows in the television industry.

]]>
How a Virtual Art Department (VAD) contributed to the Fallout TV show
Director of Virtual Production, AJ Sciutto, speaks to Women's Wear Daily about the groundbreaking development of Alo Moves XR.ExternalThu, 30 May 2024 11:32:00 +0000https://wwd.com/business-news/business-features/alo-moves-yoga-classes-meta-quest-magnopus-mr-1236406266/618131cf8cd8e779321e9666:6256e7c502af7b62119329a0:66d87b0776f82356b385b96e

Alo Moves XR for Meta Quest 3 breaks new ground as the first fitness app employing volumetric captured 3D classes. This technology provides lifelike instructors to guide users seamlessly through every movement. The app employs room mapping and object detection enabling users to practice mindfulness, relaxation, yoga, and Pilates safely and comfortably in a virtual studio.

AJ Sciutto speaks to Women鈥檚 Wear Daily to uncover how we leveraged cutting-edge MR technology to create a truly immersive wellness experience.

Permalink

]]>Director of Virtual Production, AJ Sciutto, speaks to Women's Wear Daily about the groundbreaking development of Alo Moves XR.Meet the Magnopians: Daksh Sahni草莓视频在线Fri, 24 May 2024 10:01:24 +0000/blog/meet-the-magnopians-daksh-sahni618131cf8cd8e779321e9666:6256e7c502af7b62119329a0:66505b812655f907d3e85ee7

Daksh Sahni, Senior Product Manager at 草莓视频在线, is a successful leader with 16 years in AR/VR and game development. His experience includes contributing to some of the world's best-selling AAA games at Activision, the first FDA-approved Virtual Reality therapeutic product, and working on AR hardware at Samsung Research of America.


Tell us more about your role at 草莓视频在线

I lead the Product Solutions team, working on our technologies and CSP. The team operates at the intersection of business development, technology feature development, and user experience design. We interface with potential customers and clients by providing product demos and onboarding sessions. We constantly prototype new cutting-edge product use cases. Additionally, we serve as the voice of the user within the product ecosystem. By collaborating closely with other users across the studio and third-party entities, we gather feedback and translate it into actionable insights, creating UX flows and writing product requirements.

For those who don't know, can you briefly explain what OKO is?

OKO enables users to easily create and share cross-reality experiences – connecting people, places, and things across physical and digital worlds.

OKO is a suite of apps and plugins that operate across game engines, connected to a comprehensive set of cloud services, accessible across many devices. It's built on the open-source Connected Spaces Platform (and as many other open-source and industry standards as possible).

What moment has had the most significant impact on your life?

The most impactful moment of my professional life occurred when I made a significant career pivot from traditional architecture to game development. While attending the graduate program at UCLA School of Architecture, I seized an opportunity to intern at a video game startup in Santa Monica. My role involved conducting architectural research to create a digital twin of the City of Los Angeles for a project intended to rival the mega-blockbuster Grand Theft Auto (GTA). This opportunity proved to be transformative, ultimately resulting in the acquisition of the studio by Activision. From that point on, there was no turning back.

What is the biggest lesson you鈥檝e learned in your career?

There are too many. But from my experience, I've noticed that while companies and projects may change over time, cultivating relationships and friendships with the people you collaborate with daily can have a lasting impact.

What鈥檚 your favorite thing to do when you鈥檙e not working? 

Hiking / walking /  being in nature. Appreciating art/design. Hanging out with friends.

If you had unlimited resources and funding, what project or initiative would you launch?

I'm already working on a version of the product I would launch! 馃榾 As an immigrant in this country with family all over the world, I deeply crave a connection to the people and places that have shaped my life and memories. While nothing can replace real experiences, the ability to use technology to connect and meaningfully engage with diverse cultures, people, and places is something I am deeply passionate about.

If you could wake up in the body of another person (just for one day) who would it be and why?

Without getting into the politics of it -  I鈥檇 love to be the person who can influence the end of these crazy wars !

How would your friends describe you?

Loyal, honest, reliable. 

]]>
Meet the Magnopians: Daksh Sahni
How we wrote a GPU-based Gaussian Splats viewer in Unreal with NiagaraAlessio RegalbutoWed, 15 May 2024 13:50:09 +0000/blog/how-we-wrote-a-gpu-based-gaussian-splats-viewer-in-unreal-with-niagara618131cf8cd8e779321e9666:6256e7c502af7b62119329a0:6644bd914d6ac953d2ad8444In this article, I want to share our journey of writing a fully functional Gaussian Splat viewer for Unreal Engine 5, starting right from the ground up.

Getting the ball rolling

First of all, let's quickly recap what Gaussian Splatting is. In short, it's a process that produces something similar to a point cloud, except that each point is replaced by a colored elliptic shape that changes and stretches with the camera position and perspective, blending into a continuous representation of the space. This helps keep visual information such as reflections and light shading intact in the captured digital twin, retaining details as realistically as possible.

For more info, feel free to check out my previous articles:

The first challenge was to understand specifically what format a Gaussian Splat file uses, and which one is the most commonly accepted by the industry. After in-depth research, we identified two main formats that are currently popular: .ply and .splat.

After some consideration, we chose the .ply format as it covered a wider range of applications. This decision was also driven by looking at other tools such as , which allows importing Gaussian Splats only in the form of .ply files, even though it also offers export to .splat files.

What does a .PLY file look like?

There are two different types of .ply files to start with:

  • ASCII-based .ply files, which store data in textual form.

  • Binary-based .ply files, which store data in binary form and are less human-readable.

We can think of a .ply file as a very flexible format for specifying a set of points and their attributes. It has a bunch of properties defined in its header, which instruct the parser on how the data contained in the body should be interpreted. For reference, there are very informative guides online on the generic structure of .ply files.

Here is an example of what a typical Gaussian Splat .ply file looks like:

ply
format binary_little_endian 1.0
element vertex 1534456
property float x
property float y
property float z
property float nx
property float ny
property float nz
property float f_dc_0
property float f_dc_1
property float f_dc_2
property float f_rest_0
(... f_rest_1 to f_rest_43 ...)
property float f_rest_44
property float opacity
property float scale_0
property float scale_1
property float scale_2
property float rot_0
property float rot_1
property float rot_2
property float rot_3
end_header
  • The first line identifies the file as a .ply file.

  • The second line establishes whether the data stored after the header is ASCII-based or binary-based (the latter in this example).

  • The third line tells the parser how many elements the file contains. In our example, we have 1534456 elements, i.e. splats.

  • From the fourth line until the "end_header" line, the structure of each element is described as a set of properties, each with its own data type and name. Most Gaussian Splat .ply files follow this property order; regardless of the order, though, the important rule is that all the non-optional properties are defined in the file and that the data follows the declared structure.

Once the header section ends, the body of the .ply file provides the data to be parsed for each element. Each element must strictly follow the order declared in the header in order to be parsed correctly.

This gives you an idea of what to expect when describing a single Gaussian Splat element loaded from a .ply file:

  • A position in space in the form XYZ (x, y, z);

  • [Optional] Normal vectors (nx, ny, nz);

  • Zero order Spherical Harmonics (f_dc_0, f_dc_1, f_dc_2), which determine the base color of each splat; a simple mathematical formula maps these coefficients to the output RGB value for rendering (see the short sketch after this list);

  • [Optional] Higher order Spherical Harmonics (from f_rest_0 to f_rest_44), which dictate how the color of the splat should change depending on the camera position. This mainly improves realism for the reflections or lighting information embedded into the Gaussian Splat. It is worth noting that this information is optional, and that files embedding it will be a lot larger than zero-order-only ones;

  • An opacity (opacity), which establishes the transparency of the splat;

  • A scale in the form XYZ (scale_0, scale_1, scale_2);

  • An orientation in space in the quaternion format WXYZ (rot_0, rot_1, rot_2, rot_3).
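To make the zero order coefficients more concrete, here is a minimal sketch of the standard mapping from the f_dc values to an RGB color used by Gaussian Splatting implementations (SH_C0 is the order-zero spherical harmonics basis constant). This is shown for illustration rather than quoted from our viewer, and the SplatBaseColor helper name is ours:

// Standard order-zero SH basis constant used by Gaussian Splatting implementations.
static constexpr float SH_C0 = 0.28209479177387814f;

// Illustrative helper (not part of the viewer's API): maps f_dc_0..2 and the raw
// opacity value to a displayable color. The opacity sigmoid is explained later on.
FLinearColor SplatBaseColor(const FVector& FDc, float RawOpacity)
{
	const float R = FMath::Clamp(0.5f + SH_C0 * static_cast<float>(FDc.X), 0.0f, 1.0f);
	const float G = FMath::Clamp(0.5f + SH_C0 * static_cast<float>(FDc.Y), 0.0f, 1.0f);
	const float B = FMath::Clamp(0.5f + SH_C0 * static_cast<float>(FDc.Z), 0.0f, 1.0f);
	const float A = 1.0f / (1.0f + FMath::Exp(-RawOpacity));
	return FLinearColor(R, G, B, A);
}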

All this information has its own coordinate system, which needs to be converted into Unreal Engine's coordinate system once loaded. This will be covered in more detail later in this article.

Now that you are familiar with the data we need to deal with, you are ready for the next step.

Parsing a .PLY file into Unreal

For our implementation, we wanted to support both ASCII and binary .ply files, so we needed a way to quickly parse their data and store them accordingly. Luckily, .ply files are not new: they have been used for 3D models for a long time, even before Gaussian Splats became popular, so several .ply parsers exist on GitHub and can be used for this purpose. We decided to adapt the implementation of Happly, a general-purpose open-source header-only .ply parser written in C++ (big kudos to the author).

Starting from the implementation of Happly, we adapted its parsing capabilities to the coding standard of Unreal and ported it into the game engine, being mindful of the custom garbage collection and data types expected by Unreal. We then adapted our parsing code to align with the previous Gaussian Splat structure.

The next logical step, once we knew how the data looked and how to read it from a file, was to store it somewhere. This meant we needed a class or a struct that could hold all this data for a specific lifetime within the Engine. Time to dig into some C++ code!

How could we define a single Gaussian Splat in Unreal?

The easiest way to store each Gaussian Splat's data was to define a custom USTRUCT in Unreal, optionally accessible from Blueprints, implemented along the following lines:

/**
 * Represents parsed data for a single splat, loaded from a regular PLY file.
 */
USTRUCT(BlueprintType)
struct FGaussianSplatData
{

GENERATED_BODY()

// Splat position (x, y, z)
UPROPERTY(EditAnywhere, BlueprintReadWrite)
FVector Position;

// Normal vectors [optional] (nx, ny, nz)
UPROPERTY(EditAnywhere, BlueprintReadWrite)
FVector Normal;

// Splat orientation coming as wxyz from PLY (rot_0, rot_1, rot_2, rot_3)
UPROPERTY(EditAnywhere, BlueprintReadWrite)
FQuat Orientation;

// Splat scale (scale_0, scale_1, scale_2)
UPROPERTY(EditAnywhere, BlueprintReadWrite)
FVector Scale;

// Splat opacity (opacity)
UPROPERTY(EditAnywhere, BlueprintReadWrite)
float Opacity;

// Spherical Harmonics coefficients - Zero order (f_dc_0, f_dc_1, f_dc_2)
UPROPERTY(EditAnywhere, BlueprintReadWrite)
FVector ZeroOrderHarmonicsCoefficients;

// Spherical Harmonics coefficients - High order (f_rest_0, ..., f_rest_44)
UPROPERTY(EditAnywhere, BlueprintReadWrite)
TArray<FVector> HighOrderHarmonicsCoefficients;

FGaussianSplatData()
	: Position(FVector::ZeroVector)
	, Normal(FVector::ZeroVector)
	, Orientation(FQuat::Identity)
	, Scale(FVector::OneVector)
	, Opacity(0)
	, ZeroOrderHarmonicsCoefficients(FVector::ZeroVector)
	{
	}
};

During the parsing phase, one instance of this struct is generated per splat and added to a TArray of splats, which feeds the visualization in the following steps.
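For illustration, here is a minimal sketch of that parsing phase. ReadFloat is hypothetical shorthand for reading one property of one vertex element from the parsed PLY data (the real code goes through our Happly-derived parser), and VertexCount stands for the count declared by the "element vertex" header line:

// Hypothetical sketch of the parsing loop: one FGaussianSplatData per PLY vertex element.
// ReadFloat(PropertyName, Index) stands in for the Happly-derived property accessors.
TArray<FGaussianSplatData> Splats;
Splats.Reserve(VertexCount);

for (int32 Index = 0; Index < VertexCount; ++Index)
{
	FGaussianSplatData Splat;
	Splat.Position = FVector(ReadFloat("x", Index), ReadFloat("y", Index), ReadFloat("z", Index));
	Splat.Scale = FVector(ReadFloat("scale_0", Index), ReadFloat("scale_1", Index), ReadFloat("scale_2", Index));
	// The PLY file stores the quaternion as (w, x, y, z); FQuat expects (X, Y, Z, W).
	Splat.Orientation = FQuat(ReadFloat("rot_1", Index), ReadFloat("rot_2", Index),
		ReadFloat("rot_3", Index), ReadFloat("rot_0", Index));
	Splat.Opacity = ReadFloat("opacity", Index);
	Splat.ZeroOrderHarmonicsCoefficients =
		FVector(ReadFloat("f_dc_0", Index), ReadFloat("f_dc_1", Index), ReadFloat("f_dc_2", Index));
	Splats.Add(Splat);
}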

Now that we have the core data, let's dive into the most challenging and fun part: transferring the data to the GPU so a Niagara system can read it!

Why Niagara for Gaussian Splats?

Niagara is the perfect candidate for representing particles inside Unreal. Specifically, a Niagara system is made up of one or more Niagara emitters, which are responsible for spawning particles and updating their state every frame.

In our specific case, we will use a single Niagara emitter for a basic implementation. As an example, we will call it "GaussianSplatViewer".

Now that we have our new shiny emitter, we need a way to "pass" the splats' data into it, so that for each splat we can spawn a corresponding point in space representing it. You might wonder, is there anything in Unreal we could use out of the box to do that for us? The answer is yes, and it is called the "Niagara Data Interface" (NDI).

What is a Niagara Data Interface (NDI) and how to write one

Imagine you want to tell the Niagara emitter, "Hey, I have a bunch of points I read from a file that I want to show as particles. How can I make you understand what position each point should be in?" Niagara would reply, "Make me a beautiful NDI that I can use to understand your data and then retrieve the position for each particle from it".

You might wonder, how do I write this NDI and what documentation can I find? The answer is simple: the Engine source code contains many NDIs for its own particle systems, and they're an excellent source of inspiration for building your own! The one we took the most inspiration from was "UNiagaraDataInterfaceAudioOscilloscope".

Here's how we decided to structure a custom NDI to make each splat "understandable" by Niagara when passing it through. Keep in mind that this class will hold the list of Gaussian Splats we loaded from the PLY file, so that we can access their data from it and convert it into Niagara-compatible data types for use within the particles.

Firstly, we want our NDI class to inherit from UNiagaraDataInterface, which is the interface a Niagara system expects when dealing with custom data types via an NDI. To fully implement this interface, we needed to override several functions, which I present below.

GetFunctions override

By overriding this function, we are telling Niagara, "Here is a list of functions I am defining, so that I can use them inside your Niagara modules". It tells the system what inputs and outputs each of these functions expects, the name of each function, and whether it is static or non-static.

// Define the functions we want to expose to the Niagara system from
// our NDI. For example, we define one to get the position from a
// Gaussian Splat data.
virtual void GetFunctions(TArray<FNiagaraFunctionSignature>& OutFunctions) override;

Here is a sample implementation of GetFunctions, which exposes a function called GetSplatPosition to the Niagara system using this NDI. We want GetSplatPosition to have exactly two inputs and one output:

  • An input that references the NDI that holds the Gaussian splats array (required to access the splats data through that NDI from a Niagara system scratch pad module);

  • An input of type integer indicating which splat we request the position of (this will match a particle ID from the Niagara emitter, so that each particle maps to the position of a specific Gaussian splat);

  • An output of type Vector3 that gives back the position XYZ of the desired Gaussian splat, identified by the provided input Index.

void UGaussianSplatNiagaraDataInterface::GetFunctions(
    TArray<FNiagaraFunctionSignature>& OutFunctions)
{   
   // Retrieve particle position reading it from our splats by index
   FNiagaraFunctionSignature Sig;
   Sig.Name = TEXT("GetSplatPosition");
   Sig.Inputs.Add(FNiagaraVariable(FNiagaraTypeDefinition(GetClass()),
       TEXT("GaussianSplatNDI")));
   Sig.Inputs.Add(FNiagaraVariable(FNiagaraTypeDefinition::GetIntDef(),
       TEXT("Index")));
   Sig.Outputs.Add(FNiagaraVariable(FNiagaraTypeDefinition::GetVec3Def(),
       TEXT("Position")));
   Sig.bMemberFunction = true;
   Sig.bRequiresContext = false;
   OutFunctions.Add(Sig);
}

Similarly, we will also define other functions inside GetFunctions to retrieve the scale, orientation, opacity, spherical harmonics, and particle count of our Gaussian splats. Each particle will use this information to change shape, color, and aspect in space accordingly.
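For example, a splat-count accessor (the name GetSplatsCount is illustrative, not the exact code) would be declared inside the same GetFunctions override following the exact same pattern:

// Illustrative additional signature: expose the total number of splats to Niagara.
FNiagaraFunctionSignature CountSig;
CountSig.Name = TEXT("GetSplatsCount");
CountSig.Inputs.Add(FNiagaraVariable(FNiagaraTypeDefinition(GetClass()),
	TEXT("GaussianSplatNDI")));
CountSig.Outputs.Add(FNiagaraVariable(FNiagaraTypeDefinition::GetIntDef(),
	TEXT("Count")));
CountSig.bMemberFunction = true;
CountSig.bRequiresContext = false;
OutFunctions.Add(CountSig);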

GetVMExternalFunction override

This override is necessary to allow Niagara to use the functions we declared in GetFunctions from Niagara nodes, so that they become available within Niagara graphs and scratch pad modules. It works together with the DEFINE_NDI_DIRECT_FUNC_BINDER macro in Unreal, which is designed for this purpose. Below is an example for the GetSplatPosition function.

// We bind the following function for use within the Niagara system graph
DEFINE_NDI_DIRECT_FUNC_BINDER(UGaussianSplatNiagaraDataInterface, GetSplatPosition);


void UGaussianSplatNiagaraDataInterface::GetVMExternalFunction(const FVMExternalFunctionBindingInfo& BindingInfo, void* InstanceData, FVMExternalFunction& OutFunc)
{
   if(BindingInfo.Name == *GetPositionFunctionName)
   {
       NDI_FUNC_BINDER(UGaussianSplatNiagaraDataInterface,
         GetSplatPosition)::Bind(this, OutFunc);
   }
}


// Function defined for CPU use, understandable by Niagara
void UGaussianSplatNiagaraDataInterface::GetSplatPosition(
  FVectorVMExternalFunctionContext& Context) const
{
   // Input is the NDI and Index of the particle
   VectorVM::FUserPtrHandler<UGaussianSplatNiagaraDataInterface> 
     InstData(Context);


   FNDIInputParam<int32> IndexParam(Context);
  
   // Output Position
   FNDIOutputParam<float> OutPosX(Context);
   FNDIOutputParam<float> OutPosY(Context);
   FNDIOutputParam<float> OutPosZ(Context);


   const auto InstancesCount = Context.GetNumInstances();


   for(int32 i = 0; i < InstancesCount; ++i)
   {
       const int32 Index = IndexParam.GetAndAdvance();


       if(Splats.IsValidIndex(Index))
       {
           const auto& Splat = Splats[Index];
           OutPosX.SetAndAdvance(Splat.Position.X);
           OutPosY.SetAndAdvance(Splat.Position.Y);
           OutPosZ.SetAndAdvance(Splat.Position.Z);
       }
       else
       {
           OutPosX.SetAndAdvance(0.0f);
           OutPosY.SetAndAdvance(0.0f);
           OutPosZ.SetAndAdvance(0.0f);
       }
   }
}

Note that the definition of GetSplatPosition is implemented to make this NDI CPU compatible.

Copy and Equals override

We also need to override these functions so that, when Niagara copies or compares an instance of our NDI class, it knows how to perform these operations. Specifically, we instruct the engine to copy the list of Gaussian Splats when one NDI is copied into a new one, and to consider two NDIs equal only if they hold exactly the same Gaussian Splat data.

virtual bool CopyToInternal(UNiagaraDataInterface* Destination) const override;
virtual bool Equals(const UNiagaraDataInterface* Other) const override;
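Here is a minimal sketch of what these overrides can look like, assuming the splat array is the only state worth copying and comparing (the simplified Equals check below only compares counts; a complete implementation would compare the per-splat data too):

bool UGaussianSplatNiagaraDataInterface::CopyToInternal(UNiagaraDataInterface* Destination) const
{
	if (!Super::CopyToInternal(Destination))
	{
		return false;
	}

	// Copy the parsed splats into the destination NDI and flag its render data as dirty.
	UGaussianSplatNiagaraDataInterface* DestTyped = CastChecked<UGaussianSplatNiagaraDataInterface>(Destination);
	DestTyped->Splats = Splats;
	DestTyped->MarkRenderDataDirty();
	return true;
}

bool UGaussianSplatNiagaraDataInterface::Equals(const UNiagaraDataInterface* Other) const
{
	if (!Super::Equals(Other))
	{
		return false;
	}

	// Simplified equality: same number of splats. A full check would compare each splat's data.
	const UGaussianSplatNiagaraDataInterface* OtherTyped = CastChecked<const UGaussianSplatNiagaraDataInterface>(Other);
	return OtherTyped->Splats.Num() == Splats.Num();
}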

CanExecuteOnTarget override

This override is required to let the Niagara system know whether our NDI functions should execute on the CPU or on the GPU. In our case, we initially wanted it to work on the CPU for debugging, but for the final version we changed it to target the GPU instead. I will explain this choice later in the article.

virtual bool CanExecuteOnTarget(ENiagaraSimTarget Target) const override { return Target == ENiagaraSimTarget::GPUComputeSim; }

Additional overrides required for our NDI to work on the GPU too

We also need to override the following functions, so that we can instruct Niagara on how our data will be stored on the GPU (for a GPU compatible implementation) and how the functions we declared will be mapped onto the GPU via HLSL shader code. More on this later.

// HLSL definitions for GPU
virtual void GetParameterDefinitionHLSL(const FNiagaraDataInterfaceGPUParamInfo& ParamInfo, FString& OutHLSL) override;


virtual bool GetFunctionHLSL(const FNiagaraDataInterfaceGPUParamInfo& ParamInfo, const FNiagaraDataInterfaceGeneratedFunction& FunctionInfo, int FunctionInstanceIndex, FString& OutHLSL) override;


virtual bool UseLegacyShaderBindings() const override { return false; }


virtual void BuildShaderParameters(FNiagaraShaderParametersBuilder& ShaderParametersBuilder) const override;


virtual void SetShaderParameters(const FNiagaraDataInterfaceSetShaderParametersContext& Context) const override;

CPU vs GPU based Niagara system

Each emitter of a Niagara particle system can run on the CPU or on the GPU. It's important to establish which of the two to choose, because each comes with trade-offs.

Initially, for a simple implementation, we went for the CPU based Niagara emitter. This was to make sure that the splat data and coordinates were correctly reproduced in terms of position, orientation, and scale inside the Niagara system.

However, there are some important limitations for CPU-based emitters:

  • They cannot spawn more than 100K particles;

  • They rely only on the CPU, which means they take execution time away from other scripts every frame, resulting in lower frame rates, especially when dealing with the maximum number of supported particles;

  • GPUs handle massively parallel workloads much better than CPUs, which makes them better suited to large volumes of particles.

While it makes sense to accept the CPU's 100K-particle limit for debugging, it's definitely not the right setup to scale up, especially when you want to support bigger Gaussian Splat files that may contain millions of particles.

In a second iteration, we decided to switch to a GPU-based emitter. This not only runs entirely on the GPU without affecting the CPU, but also supports up to 2 million spawned particles, 20x more than the CPU-based emitter.

The side effect of executing on the GPU is that we also needed to take care of GPU resource allocation and management, which required getting our hands dirty with HLSL shader code and data conversion between CPU and GPU.

How? You guessed it, by extending our beautiful custom NDI.

From PLY file to the GPU via the NDI

Thanks to our custom NDI, we have full control over how our data is stored in memory and how it is converted into a Niagara compatible form. The challenge now is to implement this via code. For simplicity, let鈥檚 break our goal down into two parts:

  1. Allocate memory on the GPU to hold Gaussian Splat data coming from the CPU.

  2. Transfer Gaussian Splat data from the CPU to the prepared GPU memory.

Prepare the GPU memory to hold Gaussian Splat data

The first thing to be aware of is that we cannot use Unreal data types like TArray (which holds the list of Gaussian Splats in our NDI) when we define data on the GPU. This is because TArray is designed for CPU use and is stored in CPU-side RAM, which is only accessible by the CPU. Instead, the GPU has its own separate memory (VRAM) and requires specific types of data structures to optimize access, speed, and efficiency.

To store collections of data on the GPU, we needed to use GPU buffers. There are different types available:

  • Vertex Buffers: store per-vertex data such as positions, normals, and texture coordinates;

  • Index Buffers: used to tell the GPU the order in which vertices should be processed to form primitives;

  • Constant Buffers: store values such as transformation matrices and material properties that remain constant for many operations across the rendering of a frame;

  • Structured Buffers and Shader Storage Buffers: more flexible as they can store a wide array of data types, suitable for complex operations.

In our case, I decided to follow a simple implementation, where each Gaussian Splat attribute is stored in a dedicated buffer (i.e. a positions buffer, a scales buffer, an orientations buffer, and a buffer for spherical harmonics and opacity).

Note that both buffers and textures are equally valid data structures for storing splat data on the GPU. We opted for buffers as we felt the implementation was more readable, while also avoiding an issue with the texture-based approach where the last row of pixels was often not entirely full.

To declare these buffers in Unreal, we needed to add the definition of a "shader parameter struct", which uses an Unreal Engine macro to tell the engine that this is a data structure supported by HLSL shaders (and hence by GPU operations). Here is an example:

BEGIN_SHADER_PARAMETER_STRUCT(FGaussianSplatShaderParameters, )
   SHADER_PARAMETER(int, SplatsCount)
   SHADER_PARAMETER(FVector3f, GlobalTint)
   SHADER_PARAMETER_SRV(Buffer<float4>, Positions)
   SHADER_PARAMETER_SRV(Buffer<float4>, Scales)
   SHADER_PARAMETER_SRV(Buffer<float4>, Orientations)
   SHADER_PARAMETER_SRV(Buffer<float4>, SHZeroCoeffsAndOpacity)
END_SHADER_PARAMETER_STRUCT()

It is worth noting that these buffers can be further optimized, since the W coordinate remains unused by positions and scales (they only need XYZ). To improve the memory footprint, it would be ideal to adopt channel-packing techniques, which are out of the scope of this article. It is also possible to use half precision instead of full floats for further optimization.

Before the buffers, we also define an integer to keep track of how many splats we need to process (SplatsCount), and a GlobalTint vector, an RGB value that we can use to change the tint of the Gaussian Splats. This definition goes into the header file of our NDI class.

We also need to inject custom shader code that declares our buffers on the GPU, so that they can be referenced later and used by our custom shader functions. To do this, we inform Niagara through the override of GetParameterDefinitionHLSL:

void UGaussianSplatNiagaraDataInterface::GetParameterDefinitionHLSL(
  const FNiagaraDataInterfaceGPUParamInfo& ParamInfo, FString& OutHLSL)
{
  Super::GetParameterDefinitionHLSL(ParamInfo, OutHLSL);


  OutHLSL.Appendf(TEXT("int %s%s;\n"), 
    *ParamInfo.DataInterfaceHLSLSymbol, *SplatsCountParamName);
  OutHLSL.Appendf(TEXT("float3 %s%s;\n"),
    *ParamInfo.DataInterfaceHLSLSymbol, *GlobalTintParamName);
  OutHLSL.Appendf(TEXT("Buffer<float4> %s%s;\n"),
    *ParamInfo.DataInterfaceHLSLSymbol, *PositionsBufferName);
  OutHLSL.Appendf(TEXT("Buffer<float4> %s%s;\n"),
    *ParamInfo.DataInterfaceHLSLSymbol, *ScalesBufferName);
  OutHLSL.Appendf(TEXT("Buffer<float4> %s%s;\n"),
    *ParamInfo.DataInterfaceHLSLSymbol, *OrientationsBufferName);
  OutHLSL.Appendf(TEXT("Buffer<float4> %s%s;\n"),
    *ParamInfo.DataInterfaceHLSLSymbol, *SHZeroCoeffsBufferName);
}

Effectively, this means that a Niagara system using our custom NDI will have this shader code generated under the hood, which lets us reference these GPU buffers from our HLSL shader code in the next steps. For convenience, the parameter names are defined as FString constants, which keeps the code more maintainable.
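Those name constants are not shown elsewhere in this article, so here is an illustrative example of how they can be declared (the exact names and values here are assumptions, not the actual code):

// Illustrative name constants shared between HLSL generation and VM function binding.
static const FString GetPositionFunctionName(TEXT("GetSplatPosition"));
static const FString SplatsCountParamName(TEXT("SplatsCount"));
static const FString GlobalTintParamName(TEXT("GlobalTint"));
static const FString PositionsBufferName(TEXT("PositionsBuffer"));
static const FString ScalesBufferName(TEXT("ScalesBuffer"));
static const FString OrientationsBufferName(TEXT("OrientationsBuffer"));
static const FString SHZeroCoeffsBufferName(TEXT("SHZeroCoeffsAndOpacityBuffer"));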

Transfer Gaussian Splat data from CPU to GPU

Now the tricky part: we need to "populate" the GPU buffers, using C++ code as a bridge between CPU memory and GPU memory and specifying how the data is transferred.

To do it, we decided to introduce a custom "Niagara data interface proxy" – a data structure used as a "bridge" between the CPU and the GPU. This proxy helped us push our buffer data from the CPU side to the buffers declared as shader parameters for the GPU. Inside the proxy we defined the buffers themselves, along with the functions to initialize and update them (see the sketch below).
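As a rough sketch (member names and defaults are assumptions, not the exact code), the proxy can be declared like this, with one FReadBuffer per splat attribute:

// Sketch of the render-thread proxy: holds the GPU buffers that back the shader parameters.
struct FNDIGaussianSplatProxy : public FNiagaraDataInterfaceProxy
{
	int32 SplatsCount = 0;
	FVector3f GlobalTint = FVector3f(1.0f, 1.0f, 1.0f);

	// GPU buffers; their SRVs are what SetShaderParameters binds to the shader.
	FReadBuffer PositionsBuffer;
	FReadBuffer ScalesBuffer;
	FReadBuffer OrientationsBuffer;
	FReadBuffer SHZeroCoeffsAndOpacityBuffer;

	// We keep no per-instance data in this simple implementation.
	virtual int32 PerInstanceDataPassedToRenderThreadSize() const override { return 0; }

	// Creates the buffers and fills them from the CPU-side splat data (sketched later).
	void InitializeBuffers(int32 InSplatsCount);
};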

I know this seems to be getting very complicated, but from a logical point of view it is quite simple, and I can help you understand the system by visualizing the full concept in this diagram:

Now that we have a complete overview of our system, there are some final little details we need to refine in order for it to be fully operational.

We already have the buffers' definitions for the GPU as HLSL code via the GetParameterDefinitionHLSL function. Now, we need to do the same for the functions we previously defined in GetFunctions, so the GPU understands how to translate them into HLSL shader code.

Let's take the GetSplatPosition function as an example: we previously saw how it was defined for use on the CPU. Now we need to extend its definition so that it is also declared for the GPU. We can do this by overriding GetFunctionHLSL in our custom NDI:

bool UGaussianSplatNiagaraDataInterface::GetFunctionHLSL(
  const FNiagaraDataInterfaceGPUParamInfo& ParamInfo, const
  FNiagaraDataInterfaceGeneratedFunction& FunctionInfo, 
  int FunctionInstanceIndex, FString& OutHLSL)
{
   if(Super::GetFunctionHLSL(ParamInfo, FunctionInfo,
     FunctionInstanceIndex, OutHLSL))
  {
    // If the function is already defined on the Super class, do not
    // duplicate its definition.
    return true;
  }
  
  if(FunctionInfo.DefinitionName == *GetPositionFunctionName)
  {
    static const TCHAR *FormatBounds = TEXT(R"(
      void {FunctionName}(int Index, out float3 OutPosition)
      {
        OutPosition = {PositionsBuffer}[Index].xyz;
      }
    )");
    const TMap<FString, FStringFormatArg> ArgsBounds =
    {
     {TEXT("FunctionName"), FStringFormatArg(FunctionInfo.InstanceName)},
     {TEXT("PositionsBuffer"),
       FStringFormatArg(ParamInfo.DataInterfaceHLSLSymbol + 
         PositionsBufferName)},
    };
    OutHLSL += FString::Format(FormatBounds, ArgsBounds);
  }
  else
  {
    // Return false if the function name does not match any expected.
    return false;
  }
  return true;
}

As you can see, this part of the code simply adds to the OutHLSL string the HLSL shader code that implements our GetSplatPosition for the GPU. Whenever Niagara is GPU based and the GetSplatPosition function is called by the Niagara graph, this shader code on the GPU will be executed.

For brevity, I did not include the HLSL shader code for the scale, orientation, spherical harmonics, and opacity getter functions. However, the idea is the same; we would just add them inside GetFunctionHLSL, as sketched below.
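For instance, the scale getter would add a branch like the following to the same if/else chain (GetScaleFunctionName is an assumed name constant; ScalesBufferName is the one used in GetParameterDefinitionHLSL above):

// Illustrative extra branch inside GetFunctionHLSL for the scale getter.
else if (FunctionInfo.DefinitionName == *GetScaleFunctionName)
{
	static const TCHAR* FormatBounds = TEXT(R"(
		void {FunctionName}(int Index, out float3 OutScale)
		{
			OutScale = {ScalesBuffer}[Index].xyz;
		}
	)");
	const TMap<FString, FStringFormatArg> ArgsBounds =
	{
		{TEXT("FunctionName"), FStringFormatArg(FunctionInfo.InstanceName)},
		{TEXT("ScalesBuffer"),
			FStringFormatArg(ParamInfo.DataInterfaceHLSLSymbol + ScalesBufferName)},
	};
	OutHLSL += FString::Format(FormatBounds, ArgsBounds);
}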

Finally, the actual code to transfer data from the CPU to the GPU via the DIProxy is handled by the override of SetShaderParameters:

void UGaussianSplatNiagaraDataInterface::SetShaderParameters(
  const FNiagaraDataInterfaceSetShaderParametersContext& Context) const
{
  // Initialize the shader parameters to reference the same buffers
  // held by our proxy
  FGaussianSplatShaderParameters* ShaderParameters =
    Context.GetParameterNestedStruct<FGaussianSplatShaderParameters>();
  if(ShaderParameters)
  {
    FNDIGaussianSplatProxy& DIProxy = 
      Context.GetProxy<FNDIGaussianSplatProxy>();


      if(!DIProxy.PositionsBuffer.Buffer.IsValid())
      {
        // Trigger buffers initialization
        DIProxy.InitializeBuffers(Splats.Num());
      }


      // Constants
      ShaderParameters->GlobalTint = DIProxy.GlobalTint;
      ShaderParameters->SplatsCount = DIProxy.SplatsCount;
      // Assign initialized buffers to shader parameters
      ShaderParameters->Positions = DIProxy.PositionsBuffer.SRV;
      ShaderParameters->Scales = DIProxy.ScalesBuffer.SRV;
      ShaderParameters->Orientations = DIProxy.OrientationsBuffer.SRV;
      ShaderParameters->SHZeroCoeffsAndOpacity =
        DIProxy.SHZeroCoeffsAndOpacityBuffer.SRV;
  }
}

Specifically, this transfers the buffer data from the NDI proxy (DIProxy) into the relative HLSL shader parameters, ruled by the FGaussianSplatShaderParameters struct.
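For completeness, here is a rough sketch of what the proxy's InitializeBuffers can look like. The exact RHI entry points differ between engine versions; this sketch assumes a UE 5.1-style FReadBuffer::Initialize and RHILockBuffer/RHIUnlockBuffer, and PackedPositions is an assumed CPU-side staging array of float4 values:

// Sketch only: allocate one float4-per-splat buffer for each attribute (render thread).
void FNDIGaussianSplatProxy::InitializeBuffers(int32 InSplatsCount)
{
	SplatsCount = InSplatsCount;

	PositionsBuffer.Initialize(TEXT("GaussianSplatPositions"), sizeof(FVector4f), InSplatsCount, PF_A32B32G32R32F);
	ScalesBuffer.Initialize(TEXT("GaussianSplatScales"), sizeof(FVector4f), InSplatsCount, PF_A32B32G32R32F);
	OrientationsBuffer.Initialize(TEXT("GaussianSplatOrientations"), sizeof(FVector4f), InSplatsCount, PF_A32B32G32R32F);
	SHZeroCoeffsAndOpacityBuffer.Initialize(TEXT("GaussianSplatSHAndOpacity"), sizeof(FVector4f), InSplatsCount, PF_A32B32G32R32F);

	// Uploading the CPU-side data (e.g. a staging TArray<FVector4f> PackedPositions) then looks like:
	//   void* Dest = RHILockBuffer(PositionsBuffer.Buffer, 0, InSplatsCount * sizeof(FVector4f), RLM_WriteOnly);
	//   FMemory::Memcpy(Dest, PackedPositions.GetData(), InSplatsCount * sizeof(FVector4f));
	//   RHIUnlockBuffer(PositionsBuffer.Buffer);
}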

That was a lot of code! If you managed to follow the full process, congratulations! You are now pretty much done with the low-level implementation. Let's go back up a level and finish off the remaining pieces to complete our Gaussian Splat viewer!

Register our custom NDI and NDI proxy with Niagara

One last thing required to access our custom NDI inside the Niagara property types is registering it with the FNiagaraTypeRegistry. For convenience, we decided to do it inside the PostInitProperties of our NDI, where we also create the NDI proxy that will transmit data from the CPU to the GPU.

void UGaussianSplatNiagaraDataInterface::PostInitProperties()
{


  Super::PostInitProperties();


  // Create a proxy, which we will use to pass data between CPU and GPU
  // (required to support the GPU based Niagara system).
  Proxy = MakeUnique<FNDIGaussianSplatProxy>();
 
  if(HasAnyFlags(RF_ClassDefaultObject))
  {
    ENiagaraTypeRegistryFlags DIFlags =
      ENiagaraTypeRegistryFlags::AllowAnyVariable |
      ENiagaraTypeRegistryFlags::AllowParameter;


    FNiagaraTypeRegistry::Register(FNiagaraTypeDefinition(GetClass()), DIFlags);
  }


  MarkRenderDataDirty();
}

Here is a screenshot of our updated shiny Niagara system making use of our custom NDI and getter functions exposed in its graph!

The big challenge of converting from PLY to Unreal coordinates

There is hardly any documentation available online that explicitly specifies the conversions required to transform data coming from a PLY file into Unreal Engine's coordinate system.

Here are some funny (and painful) failures we had to go through before finding the right conversions.


After many trials and mathematical calculations, we finally established the proper conversions. For your convenience, here is the list of operations:

Position (x, y, z) from PLY
Position in UE = (x, -z, -y) * 100.0f

Scale (x, y, z) from PLY
Scale in UE = (1 / (1 + exp(-x)), 1 / (1 + exp(-y)), 1 / (1 + exp(-z))) * 100.0f

Orientation (w, x, y, z) from PLY
Orientation in UE = normalized(x, y, z, w)

Opacity (x) from PLY
Opacity in UE = 1 / (1 + exp(-x))

In order to keep performance optimal, these conversions are performed on load rather than at runtime, so that once the splats are in the scene, no update is required per frame.
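As an illustration of those on-load conversions, here is a minimal sketch (the ConvertSplatToUnreal helper name is ours, and it assumes the struct passed in still holds the raw PLY values, with the quaternion components already placed into FQuat's X, Y, Z, W slots as in the parsing sketch earlier):

// Illustrative helper applying the PLY-to-Unreal conversions listed above at load time.
FGaussianSplatData ConvertSplatToUnreal(const FGaussianSplatData& RawSplat)
{
	const auto Sigmoid = [](float Value) { return 1.0f / (1.0f + FMath::Exp(-Value)); };

	FGaussianSplatData Converted = RawSplat;

	// Position: swap/negate axes and convert metres to Unreal centimetres.
	Converted.Position = FVector(RawSplat.Position.X, -RawSplat.Position.Z, -RawSplat.Position.Y) * 100.0f;

	// Scale: sigmoid activation of the raw values, scaled to centimetres.
	Converted.Scale = FVector(
		Sigmoid(static_cast<float>(RawSplat.Scale.X)),
		Sigmoid(static_cast<float>(RawSplat.Scale.Y)),
		Sigmoid(static_cast<float>(RawSplat.Scale.Z))) * 100.0f;

	// Orientation: the raw (w, x, y, z) values were already stored in FQuat's (X, Y, Z, W)
	// slots during parsing, so here we only need to normalize.
	Converted.Orientation = RawSplat.Orientation.GetNormalized();

	// Opacity: sigmoid activation.
	Converted.Opacity = Sigmoid(RawSplat.Opacity);

	return Converted;
}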

Here is how the resulting Gaussian Splat viewer looks once the right conversions are applied at the end of the process described in this article.

There are some more bits and bobs of code to deal with further geometric transformations and clipping, but those remain outside of the scope of this article.

The final result with some more feedback

This has been a very long journey, resulting in a very long article, I admit. But I hope it has inspired you to better understand how Niagara in Unreal can be customized to interpret your own data; how its performance can be optimized via GPU-based HLSL shader code injected from your custom Niagara Data Interface and Niagara Data Interface Proxy; and finally how Gaussian Splats can be viewed in the viewport after all this hard work!

Thank you for following this journey, and feel free to follow along on LinkedIn for more tech-based posts in the future!

Happy coding! 🙂

]]>
How we wrote a GPU-based Gaussian Splats viewer in Unreal with Niagara
Meet the Magnopians: Chris Kinch草莓视频在线Tue, 30 Apr 2024 09:45:44 +0000/blog/meet-the-magnopians-chris-kinch618131cf8cd8e779321e9666:6256e7c502af7b62119329a0:662fbf489fe629730e175bc5Chris Kinch is a Psychology Lecturer turned 3D Artist, using his work ethic from academia to teach himself 3D art for games during the COVID pandemic. He is a Junior Artist at 草莓视频在线, and has recently celebrated his 1 year anniversary at the company. We caught up with him to discover what attracted him to 草莓视频在线 in the first place, and hear more about his pivot from Psychology to Art. 


Tell us more about your role at 草莓视频在线

I'm a Junior Artist, so my day-to-day is mostly spent modelling and texturing assets for our current project with direction from our Senior Artists. I'd describe my current role as 'generalist' – we're a fairly small team so it's helpful to be able to move around based on the project's needs, whether that's environment work, props, set dressing, level creation, etc.

You've been at 草莓视频在线 for a year now, but what attracted you to 草莓视频在线 in the first place?

To be 100% honest, it was my mentor at the time who recommended I apply for the position! The name 草莓视频在线 wasn't on my radar, but it turned out that 草莓视频在线 had worked on the first VR game I ever played – Mission: ISS.

Looking back, it was the recruitment process that made 草莓视频在线 stand out against other companies I was interviewing for. The art test set for me was a bespoke task based on feedback that the Art Director at the time had given on one of my portfolio pieces. It's rare to get feedback even at the end of an interview, so right from the start I got the impression that 草莓视频在线 was a company that invested in its artists. 

Mission: ISS lets users explore the International Space Station in detail and understand what it's like to be an astronaut in a way that's never before been possible.

What made you decide to pursue a career in this field?

Before 3D, I was part way through a PhD in Psychology but that all came to a stop during the COVID lockdowns. I've always been interested in art as a hobby, so with a lot of time suddenly on my hands, and very much on a whim, I took a stab at some 3D tutorials on YouTube. I can't say for sure what pulled me in, but I was totally hooked! It's terribly cliché, but after some time I knew I wanted to do this as a career. After some emotional discussions, I withdrew from my PhD and put my best foot forward!

What skills are essential for anyone in your role?

More virtue than skill, but I'd say humility is really important. We can put so much effort into the work we do that sometimes it can be difficult to remove ourselves from the process of receiving feedback and iterating. Similarly, I think it's important to learn not to be precious about our work 鈥 sometimes the best approach is to start from scratch even though that can be really difficult!

I also think problem-solving is sometimes overlooked when you're getting started in 3D. It can be a bit more technical than other art disciplines, so not being discouraged when things don't work as expected and learning to troubleshoot and research solutions are important. It's kind of inseparable from the discipline, so if you can learn to enjoy it (almost) as much as the creative parts, you'll have far fewer headaches!

What's the best piece of advice you've ever been given?

Your 100% effort doesn't always look the same. Sometimes, you might only have 50% left in the tank - if you give that 50%, you are giving 100% of what you have in that moment. 

If you could have any other job in the world, what would it be?

Tough one. Maybe a carpenter? I feel like I'd still want to make things. 

Where would you most like to travel to in the world?

I'd love to go back to Japan. I spent a week there and it wasn't nearly enough time!

How do you want to leave a mark on the world – personally or professionally?

When I was starting out learning 3D, I relied a lot on the generosity of other artists who created free instructional content or volunteered their time to give advice (to be honest, I still do!). One day, I'd also like to be in a position where I can provide similar help to other new artists.  

What are you reading/listening to/watching right now?

Right now I'm watching The Rookie – that Cop Cuties song went viral on TikTok and has had me in a chokehold ever since. I'm also finishing up rereading The Witcher books!

]]>
Meet the Magnopians: Chris Kinch
Ben Grossmann speaks to fxguide about how LED volumes were used as a storytelling device in 'Fallout'.ExternalWed, 17 Apr 2024 09:07:00 +0000https://www.fxguide.com/fxfeatured/inside-the-led-bunker-of-fallout/618131cf8cd8e779321e9666:6256e7c502af7b62119329a0:66e946d2bcf7e65482b7e0b6

The article explores the use of LED volumes in the production of Amazon Prime's Fallout. It details how physical and digital production methods combined to create immersive environments, utilizing advanced Unreal Engine technology for in-camera visual effects (ICVFX). The collaboration between the virtual art department and production teams streamlined workflows, enhancing creativity and efficiency. The LED volumes enabled seamless integration of real-time 3D assets with practical sets, reducing the need for post-production VFX replacement and capturing final pixels directly on set.


]]>Ben Grossmann speaks to fxguide about how LED volumes were used as a storytelling device in 'Fallout'.