UOW Makerspace | 360° Introductory Videos

Concept

Upon volunteering with the UOW Makerspace team at the start of the session, a few things quickly became apparent.

  1. The Makerspace is very popular among STEM students.
  2. The Makerspace is not very popular outside STEM students.
  3. Many people entering the Makerspace have no idea how to use the tech.

As such, I initially conceptualised filming half a dozen videos that would hypothetically replace a large part of the inductions students are currently required to undertake for many of the tools inside the facility. Moreover, I decided I could film using equipment from the Makerspace itself, the Samsung Gear 360 camera being a nice showcase of the technology available within. I came across research published by Google which suggested that 360 video motivates viewers to ‘get in on the action’ by moving about and controlling their perspective, and that it ‘also makes them want to share’ the experience, which sounded like a great opportunity for the Makerspace to gain some word-of-mouth. This choice also segued nicely into featuring another technology that the Makerspace has in spades: VR headsets. Virtual reality headsets greatly increase the value of 360 video with their ability to respond immediately to your turning movements, heightening the sense of immersion far beyond simply watching on a flat screen via Facebook or YouTube.

Another benefit I had envisioned was that, if filmed from a first-person perspective, the format might work exceptionally well for teaching the viewer exactly how to operate the technology. Eventually, my concept changed to using 360 videos as an introductory device for equipment that requires an induction to use. As approximately one-minute videos, they would instead show students the basics of the technology in question, then prompt the viewer to sign up for an induction if the video interested them. I will touch more on why this changed later under ‘Trajectory’.

Methodology

The first step in my methodology was embracing the F.I.S.T approach by simply grabbing the 360 camera and seeing what worked and what didn’t. For example, I quickly filmed some footage to understand what my workflow was going to be like. After doing a grandiose sweep of the ‘space with the camera, I installed the software bundled with the camera on a computer of choice and fumbled with it until I understood how to proceed. One indispensable point of reference for this entire project, particularly in the formative, experimental stages, was this video tutorial by ‘Geoff’ at geoffmobile.com. That resource broke down the vital technical minutiae, including the ‘stitching’ process with the proprietary software, as well as what settings to fiddle with in Premiere Pro afterwards, such as the ‘offset’ function and final export settings.

After filming some footage, I would ‘stitch’ it either by using the PC software or by transferring it over Bluetooth from the Gear 360 to the Makerspace’s own Samsung Galaxy S7. The latter was a suitable alternative if the footage was less than approximately 50 megabytes in size; any larger and it was quicker to go through the PC software. The next step was viewing the test footage on the Gear VR. Once copied to the phone, it was possible to see how the end product would look in virtual reality, and therefore whether a given shooting technique was viable. Then, it was often a matter of taking advantage of opportunities where someone was using the technology I wanted to film and asking if I could film their work. Other times, such as during the Open Day on 19th August, I aimed to take advantage of the increased foot traffic and get as much footage as possible for the video on VR, with mixed results.


UOW alumnus ‘Robert’ reaching for the stars in ‘Mission: ISS’

After checking the footage was usable on the PC with the stitching software, I had to change workstations to use Premiere Pro on the Makerspace Macs (I hadn’t put much consideration into which computer I installed the single-licence program on). I previously had no experience with Premiere Pro, so basic YouTube tutorials such as this one helped me understand how to do simple things like fade-ins, adding text titles to the start of videos, and managing different audio levels. As I was cobbling together my footage in Premiere, I noticed that some video clips had considerable ‘shaking’ that wasn’t immediately obvious. This user guide helped me stabilise clips such as the time-lapse in the carving machine, which would have otherwise been unusable.

I would then start thinking about what my voice-over should say over the video, consulting the most knowledgeable Makerspace official on that technology, be it Nathan the coordinator, or casual staff members Tom and Michael. It would be remiss of me not to sincerely thank, and credit, these three individuals for the completion of these videos. They contributed not only to the spoken component of the videos, but were also there to suggest solutions and alternatives when something didn’t work out. As such, they were another crucial source in assembling this project. After the commentary was recorded, using a quality ‘Rode Podcaster’ microphone from the Makerspace, naturally, I sourced background music for the videos from ‘freemusicarchive.org’. This website provided me with almost too much choice; it was often a time-consuming process merely to wade through the thousands of applicable tracks.

The final videos represent an exercise in compromise, as I fought to bend both the edited footage and the spoken commentary into a cohesive whole. This was the result of my indecision on whether to write the ‘script’ first or to film and edit a smooth-flowing video, something I will need to choose a direction on going forward. Because I had edited the video in Premiere Pro as ‘equirectangular’ footage, effectively a ‘flat’ image like typical maps of the Earth, the exported file would then need to be corrected.


Equirectangular projection; note the bulbous distortion effect
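To make the ‘flat’ nature of this footage concrete: an equirectangular frame simply maps the camera’s horizontal angle onto the x-axis and its vertical angle onto the y-axis, the same way a world map flattens the globe, which is where that bulbous distortion comes from. A minimal Python sketch of the mapping (the function name and frame size are my own illustrative choices, not part of any Gear 360 software):

```python
def equirect_to_pixel(yaw_deg, pitch_deg, width, height):
    """Map a viewing direction (yaw/pitch, in degrees) to pixel coordinates
    in an equirectangular frame: longizontal and vertical angles are mapped
    linearly to x and y, which is why content near the poles looks stretched."""
    u = (yaw_deg + 180.0) / 360.0    # 0..1 across the full 360 degrees horizontally
    v = (90.0 - pitch_deg) / 180.0   # 0..1 from straight up to straight down
    return int(u * (width - 1)), int(v * (height - 1))

# In a 3840x1920 (2:1) frame, looking straight ahead (yaw 0, pitch 0)
# lands in the centre of the image.
print(equirect_to_pixel(0, 0, 3840, 1920))  # (1919, 959)
```

A 360 player (or VR headset) performs the inverse of this mapping every frame, sampling the flat image back onto a sphere around the viewer.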

While some video players such as ‘Fulldive VR’ on the Play Store can wrap footage in real time, YouTube, Facebook and other mobile VR platforms need the final exported video to be transformed back into 360. This necessitated using a tool called the Spatial Media Metadata Injector. This script tags the exported ‘flat’ video file with the metadata players need to wrap it around a digital sphere, resulting in an ‘injected’ version and effectively bringing the process to a conclusion. I then uploaded the completed video as ‘unlisted’ to YouTube and sent a link to the media team, who could watch it and make sure nothing needed changing. Assuming there were no problems, I’d ask the team to distribute the videos on the social media accounts at regular intervals to space them out.
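For the curious, the ‘injection’ doesn’t re-encode the video at all; the tool writes a small block of XML metadata (the Spherical Video V1 tags published alongside the injector) into the MP4 container, which is what tells YouTube and Facebook to treat the flat frame as a sphere. A sketch of what that metadata looks like, trimmed to the essential tags:

```python
# These tags follow the Spherical Video V1 spec used by the Spatial Media
# Metadata Injector; the real tool embeds this XML inside a 'uuid' box in
# the MP4 container rather than touching any footage.
SPHERICAL_XML = (
    '<rdf:SphericalVideo '
    'xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" '
    'xmlns:GSpherical="http://ns.google.com/videos/1.0/spherical/">'
    '<GSpherical:Spherical>true</GSpherical:Spherical>'
    '<GSpherical:Stitched>true</GSpherical:Stitched>'
    '<GSpherical:ProjectionType>equirectangular</GSpherical:ProjectionType>'
    '</rdf:SphericalVideo>'
)

# Players look for these tags to decide whether to wrap the frame.
print('equirectangular' in SPHERICAL_XML)  # True
```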

Utility

With the Makerspace, and the Makerspace Club by extension, being new features on the UOW landscape, any content that can be disseminated and associated with them would be of value to that community. However, as I touched on earlier, I believe the project has three main utilities, namely:

  1. Giving the Makerspace visibility beyond STEM students (much of the tech is perfect for media and creative arts applications).
  2. Introducing students to technologies that they aren’t familiar with; encouraging further engagement through inductions.
  3. Providing a showpiece for myself that demonstrates my ability to work with cutting-edge technologies, people, video and audio.

See: ‘irrelevant pet-project’

A secondary benefit of the project has been my own initiation with, and discovery of interest in, machinery that I otherwise wouldn’t have given the time of day. Having conquered the VR and 3D printing arenas, I turned my attention to the automatic embroidery machine for another video. It didn’t take me long to fully appreciate the utility of the device, even if only for my own irrelevant pet-projects.


Trajectory

The initial concept of filming the Makerspace inductions themselves had to change for a few reasons. Primarily, formal inductions require signed paperwork from the participant indicating that they acknowledge any risks involved and that they’ve been supervised by an assessor. With a video that one watches alone, there’s no way to verify the viewer actually understands the information. The second reason was that filming in first person at length, while also maintaining a quality, informative presentation, would simply be too difficult. I did look into the possibility: this blog post, detailing how to make a DIY hat mount for your Gear 360 that’s ‘not entirely nauseating’, didn’t leave me convinced. The reality of enormous file sizes would pose yet another issue. The final, and perhaps most practical, reason for the change to short, introductory videos was that nobody wants a relatively low-resolution screen plastered to their face for 45 minutes.

Before settling on the Makerspace’s own Gear 360 camera, this source motivated me to consider all the options available to me first. I spoke with AV Services in building 20, AV Services in building 23, IMTS in the library and even the T&T Hub to find the ideal equipment before starting the project. The suggestion by Makerspace coordinator Nathan that I could make use of the other 360 camera available to him sealed the deal, although in the end I had to make do with the single Gear 360 camera.

Although the Makerspace is closed on weekends and after 6pm, I still needed to work on my videos regardless. With the single-licence software that came with the Gear 360 installed on a Makerspace PC, and the accompanying Galaxy S7 that would otherwise be used to stitch footage tucked away, I often had to wait until I could return to the Makerspace. That was until I came across this Reddit thread, which detailed how to obtain a 30-day trial with some finagling. This allowed me to complete the time-consuming stitching process at home, in the background. The next obstacle was Premiere Pro, the trial of which only lasted a week and ran awfully on my otherwise respectable PC; the preview video I made for my beta presentation inexplicably took hours to assemble and render.

This meant that the final stretch of editing occurred in a flurry of different circumstances: initially on the Makerspace Macs, then on library Macs on the weekend with my own laptop for recording commentary with Audacity in quiet locations, and finally on my girlfriend’s laptop to continue editing in Premiere during the library’s closing periods. The Gear VR being inaccessible also prompted us to swap phones during this period, so I could test my exported videos on her Google Pixel + Daydream VR configuration instead.

The push to get all 5 videos done for the machinery that requires inductions has meant that I have had to put a pin in the video on the virtual reality headsets for the moment. With all the footage ready to go, I look forward to completing it as a fun addition to the main 5 videos in the next couple of days, incorporating 360 footage of VR games.

Successes

This has been one of the most rewarding, motivating and challenging projects I have undertaken in years. Each video posed a different problem that I relished solving.

  • How do you effectively make use of 360 filming to showcase examples of what you can make with a technology?
  • How do you record virtual reality footage and present it as 360 video?
  • How do you capture an hour-long manufacturing process inside a one-minute video?
  • How do you affix a 360 camera to the inside of a laser cutter or carving machine?

With a custom, 3D modeled and printed mount made from scratch. That’s how.

These were all stimulating challenges that I genuinely loved encountering and overcoming. Beyond mere enjoyment, I also managed to complete a video for every Makerspace technology that requires an induction to use, which was my goal.

Limitations

Despite the great personal satisfaction gained, and having accomplished what I set out to do, it certainly wasn’t easy or a smooth process.

  • 360 + VR footage required much more time to edit than traditional video; I had to repeatedly check that it still looked fine in VR after each round of editing.
  • The reality of the project meant a lot of back-and-forth, not only between editing stages, but between the university and my home to record, stitch and edit. This constantly presented issues of compatibility and differing licensing affordances (fonts, plug-ins).
  • The Samsung Gear 360 (2016 model) camera would frequently overheat and shut off at critical times, e.g. just as the embroidery machine started to sew; a well-known problem with that model.
  • The final videos look crisp and high-resolution in 4K on a flat screen, but when viewed in VR (as intended) they look a bit blurry. Perhaps the immersion factor makes up for this shortcoming; I’ll let the audience decide.

I’d like to finish by giving special thanks to my friends Elysse Turner, for encouraging me, lending her equipment and professional expertise, and Cường Lâm, who assisted me in filming and could always be counted on to provide excellent feedback and suggestions.

References:

 

As hyperlinked in the body:

 

  1. Is 360 Video Worth It?, https://www.thinkwithgoogle.com/advertising-channels/video/360-video-advertising/
  2. How to Edit Samsung Gear 360 video in Adobe Premiere, https://www.youtube.com/watch?v=wO7FNAnqkH0
  3. Adobe Premiere Pro for Absolute Beginners, https://www.youtube.com/watch?v=aMeHRRWNGgA
  4. Stabilize motion with the Warp Stabilizer effect, https://helpx.adobe.com/au/premiere-pro/using/stabilize-motion-warp-stabilizer-effect.html
  5. Free Music Archive, http://freemusicarchive.org/
  6. Fulldive VR, https://play.google.com/store/apps/details?id=in.fulldive.shell&hl=en
  7. Spatial Media Metadata Injector, https://github.com/google/spatial-media/releases
  8. Samsung Gear 360 Camera – DIY Hat Mount, http://www.rhizomelabs.com/single-post/2016/06/22/Samsung-Gear-360-Camera-DIY-Hat-Mount
  9. 4K 360 camera comparison, http://360rumors.com/2016/09/4k-360-camera-comparison-nikon.html
  10. Action Director missing serial work around, https://www.reddit.com/r/Gear360/comments/5mkvy0/action_director_missing_serial_work_around/
  11. 360 Capture SDK, https://github.com/facebook/360-Capture-SDK
  12. Potential Gear 360 Major Flaw: Overheating, https://www.reddit.com/r/Gear360/comments/4k6ixd/potential_gear_360_major_flaw_overheating/


The Asian Stories – Peer Review

Concept

Lam’s video series ‘The Asian Stories’ is an ongoing project consisting of short ‘Vines’ that he has developed since undertaking ‘Convergent Media Practices’ in April 2016. Broadly speaking, the ‘Vines’ chronicle the experiences of an ‘Asian guy’ in a Western community. The initial iteration of the series, which Lam refers to as the ‘first season’, satirically engaged with and presented, with a level of absurdity, numerous perceived Asian stereotypes, which I will detail shortly under ‘Trajectory’. Currently, the concept for the series is the exploration of possible similarities that exist across the cultural divide of West and East, which Lam perceives as helping to ‘bridge the gap’. The episodes in this third season show the ‘Asian’ caricature learning to adapt his behaviour and mannerisms to his ‘new environment’, as Lam put it in his Beta presentation. For example, Episode #16 ‘Greeting Teacher’ pictures ‘the first days’ of the character reacting to a new teacher entering the classroom. He initially acts very rigidly and respectfully, standing upright when first encountering a new teacher as he might in his homeland. His fellow students, and the teacher, react with bewilderment. Some days later the character is in class, and when the teacher arrives, he has fully embraced the informal ‘Western’ greeting to a comedic extent.

Trajectory

Lam’s first season was, by his own admission, a somewhat simplistic but effective exploration, and sometimes subversion, of Asian stereotypes. In hindsight, he regrets not preparing for those early episodes, as they relied too heavily on improvisation. The second season started to move away from that stereotype formula and focused on differences that Asian students sometimes encounter at university. While this material still made fun of these situations, it began to explore some interesting cultural variances. With his third season, Lam has embraced a more comparative, critical process by juxtaposing ‘Asian’ culture against the ‘Western’ culture and experiences that Lam has personally had in Australia, ranging from differing tastes in food to hobbies and language. It also shows the character ‘meeting half way’ with his peers, both adapting their cultural attitudes to connect better. Mirroring Lam’s experiences, this third season represents a hybridisation where the character really has taken on some Western characteristics. Some other changes will be noted in the feedback section.

Methodology

Lam tells me that his reasons for choosing the ‘Vine’ style medium are multifaceted. Firstly, Vines have a reputation for being short, light-hearted, humorous and fun creations that shouldn’t be taken too seriously. He hopes this will help prevent any complaints that the videos are in poor taste or misrepresent the people they depict. Culture is, after all, personal and can be sensitive, which Lam fully acknowledges and does his best to respect. Secondly, Vines embrace F.I.S.T sensibilities: there’s the expectation that they are filmed fast and inexpensively with just a phone’s camera, involve simple editing, and are individually tiny videos. Aware of his limitations in scope and time, Lam chooses ideas that are ‘typical’ and easy to film, ‘day-in-the-life’ material being ideal; nothing too extraordinary. Then, he tries to visualise scenes and write the script, while organising with the actors, most frequently Mitchell Trench, the best time to meet. Shooting often takes several hours, as his script isn’t necessarily ‘final’; improvisation and spontaneity often add much to the videos. Editing is a simple matter of applying his ‘package’ of assets (e.g. the now-iconic goat sound effect and a simple font). The final step is uploading the video to Facebook, where he can reach his large friend/audience base and where people can easily tag their friends and share the ‘Vine’.

Utility

Lam states that the dominant theme across each season of his videos is the introduction and representation of elements of his culture to his Australian peers. Perhaps more importantly, however, it enables him to show friends back home what his day-to-day life is like in a completely different country, while giving them a sense of what it is like to be immersed in a culture other than their own. Additionally, it allows him to tell his stories in a more ‘engaging’ and amusing manner than simple messaging. For his friends who also happen to live in another country such as Australia, Lam believes that it offers a form of mutual understanding in seeing that other Asian students encounter the same obstacles. Perhaps this recognition offers some form of advice in overcoming cultural barriers.

Feedback

Lam has embraced a range of feedback so far on his series. For example, during his second season Lam began to add Vietnamese subtitles to each video, since many friends and family, his dad included, don’t speak English and so couldn’t enjoy the videos. Another development was ensuring each video has a customised thumbnail to grab the viewer’s attention. While Lam doesn’t do much in the way of previewing his work to get feedback, he gets plenty once he uploads an episode. He admits that while he hasn’t iterated a whole lot during this season, instead focusing on consistency, he has new ideas waiting to deploy for the fourth season. For example, he has heard interest in the introduction of new sound effects and episode ideas. Lam specifically thanks Briana Wallace, who gave him a swathe of concepts including his most recent episode on anime. Thematically, Lam is considering taking the next logical step, evolving the ‘Asian’ character to depict him enthusiastically attempting Western/Australian experiences with, I’m sure, a variety of results. Some suggestions I’ve presented, based on the Beta presentation and with the future in mind, include:

  • Possibly introducing another recurring character, or an additional cast member.
  • Playing with formula, setting and background. ‘One-off’ experimental episodes!
  • Investigating whether there are any quirks that ‘South Asian’ students, such as Indians and Pakistanis, might encounter.

For improving the final submission:

  • Uploading a compilation of each of the three seasons onto YouTube as approximately 5-minute videos.
  • Getting in touch with other Asian friends to contribute additional translations for YouTube’s caption system. Korean, Chinese, Japanese?
  • Featuring widescreen-style black bars above and below the footage to hold the translated subtitles for better readability.
  • Reaching out to external Facebook pages which might enjoy promoting his videos to their audience; don’t limit yourself to your own Facebook/YouTube! Vines are meant to be shared!
  • Better documenting/showing examples of feedback received.

‘Don’t Fall for The Paywall’ Social Media Campaign (Planned)

A mock-up campaign designed by Xara Alice and me to encourage consumers to look out for exploitative game-design practices, particularly in the mobile gaming sphere, which prey on unwitting audiences such as children. We created a distributable flowchart image (‘Don’t Fall for The Paywall’) that would allow someone to easily identify whether a game they’re playing might have the potential to exploit younger audiences.

Jim Steyer, president of Common Sense Media, said of ‘Smurfs’ Village’ that it and similar games are ‘deceptively cheap’, and called for parents to become aware of the ‘promotion of games and their delivery mechanism’ to avoid their kids racking up thousands in bills. This was the impetus for our campaign, which we ultimately decided not to pursue.


Created by Clancy Carr & Xara Alice

 

Nintendo’s fight against participatory culture and remediatory practices

Today’s participatory media culture has thoroughly established itself as the new landscape. Henry Jenkins defined participatory culture as ‘any kind of cultural production which starts at the grassroots level and which is open to broad participation’ (2010), and the affordances granted by new technology enable ‘average consumers to archive, annotate, appropriate and recirculate media content’ (2006, p. 1).

Since then, the widespread adoption of smart devices and the near universal embrace of popular culture by web 2.0 online communities have facilitated an unprecedented, revolutionary change in the way fans engage with intellectual property ‘that is anything but fringe or underground today’ (Jenkins, 2006, p. 2). This engagement transcends the mere appropriation of material, extending to the production and submission of remixed content; better defined as the remediation of media.

Video games as a medium are uniquely impacted by this paradigm shift, the popularity of YouTube having encouraged once-passive players to remix game content in various unanticipated ways, producing legally grey derivative works. These range from machinima, speed-runs and the ‘Let’s Play’ phenomenon to an increased interest in fan-created work like mods and additional levels. This essay will argue that Nintendo’s deliberate suppression of participatory culture by means of corporate policy negatively impacts both its reputation among fans and, subsequently, the brand’s exposure.


Nintendo’s ‘ever-evolving but still archaic perception of fan content’ (Greenwald 2015) can be evaluated starting from their early support of the controversial ‘Content ID’ system that YouTube expanded upon in 2013. This caused it to ‘scan more channels, including those affiliated with a multi-channel network’, referring to channels that are part of a company like Machinima (Kain 2013).

By YouTube’s own description, Content ID acts to ‘identify and manage their [copyright owners] content on YouTube’ (2013). This automated system operates by scanning footage being uploaded ‘against a database of files that have been submitted… by content owners’ (YouTube 2013) who then decide themselves what to do with the film identified as copyrighted material.

If you are the copyright holder, a range of options is available to you, such as muting offending music, replacing the video’s current advertisements with your own, claiming ad revenue for yourself, issuing the producer a copyright ‘strike’, or withholding advertising on that video for up to 30 days, the period during which it would make most of its Google AdSense money (McArthur 2014).

In February of 2013, many publishers took to the internet to absolve themselves of the unintentional copyright notices, advising users to ‘let us know if you’ve had videos flagged today’ and to ‘contest them so we can quickly approve them’ while they worked on longer-term resolutions to the issue. Some, such as Paradox Interactive, went a step further by issuing a letter giving permission to all third parties on YouTube to use their IPs for remediatory purposes (Tassi 2013).

The clear problem was that ‘a machine can’t determine fair use’ accurately (Bailey 2013), which resulted in unintentional matches and pre-empted much of the ire YouTube sees directed at it even to this day. Nintendo, however, began to claim 100% of ad-based revenue for themselves on all YouTube videos showcasing its games and replaced previous ads with their own (Ligman 2013). This upset YouTube producers who relied on the platform’s income as their main means of supporting themselves. Some, understandably, expressed their disdain by vowing to never cover a Nintendo-produced game again (Cushing 2013).

Zack Scott, the owner of a channel with over 2 million subscribers, later argued that ‘video games aren’t like movies or TV… each play-through is a unique audio-visual experience’ (2013), before himself ceasing to produce Nintendo-oriented videos. This policy was confirmed by Nintendo shortly after, stating that ‘videos featuring Nintendo-owned content’ would feature adverts ‘at the beginning, next to or at the end of the clips’. In addition, they noted ‘…unlike other entertainment companies, we have chosen not to block people using our intellectual property’ (Cushing 2013).

By and large, however, as mentioned above, video-game companies specifically were not even claiming ad revenue, let alone blocking videos produced with their IP.

While Nintendo does possess ‘the dubious legal right to go after people who monetize footage of their video games’ (Greenwald 2015), it’s a course of action that is damaging to all parties involved. The people who produce and watch these videos are fans of Nintendo’s creations; diminishing their ability to express that admiration, in the form of ‘Let’s Plays’ in particular, denies Nintendo free and esteemed marketing from communally respected individuals whose opinions hold undeniable weight.

As a somewhat predictable outcome of this reduced exposure and condemnation of the targeted audience, sales for their software experienced a slight dip during the following weeks (VGChartz 2013). On the 24th of June however, Zack Scott again began releasing videos based on Nintendo games, due to the copyright claims having stopped appearing in his inbox and his ad earnings returning from the 23rd of May (Totilo 2013).

On the 28th of January 2015, Nintendo unveiled the ‘Nintendo Creators Program’, which served as a compromise between Nintendo and YouTubers that allowed creators to make money off their Nintendo-related videos. Participants could choose to either individually register their Nintendo-derivative videos to receive a 60% cut of ad proceeds, or register their entire channel to earn slightly more: 70% (Hernandez 2015). These rates could be ‘changed arbitrarily’ according to the terms and conditions, and in addition, the individual submission and authentication of videos can ‘regularly take up to three business days’.

Of note, this would guarantee that Nintendo receives at least a sizable portion of revenue not only from expected material like Let’s Plays and machinima films, but also from reviews. Hernandez foresees a realistic situation in which a YouTuber receives review code for a game early and rushes to play, record, edit and produce a review, only for it to be irrelevant to consumers who’ve already made their purchasing decision based on other sources. This system poses no advantage to either the YouTuber or the consumer; only to Nintendo.

Another concern that prompted the program’s revision was the expectation that producers register their whole channel, and Nintendo’s ability to unlawfully force their ads onto unrelated content. Prominent YouTuber Felix ‘PewDiePie’ Kjellberg lamented that ‘the people who have helped and showed passion for Nintendo’s community are the ones left in the dirt the most’ (Kjellberg 2015), with others expressing concern at the need to sign a contract to review a game, allowing for the possibility of a video deemed ‘inappropriate’ by Nintendo being refused publication.

In February, Nintendo attempted to remedy some of the more objectionable parts of their program, having seemingly realised they couldn’t profit off videos of games they own no licence to. The update told subscribers that if their channel contained videos of games not on Nintendo’s ‘list of supported games’, they would need to ‘remove those videos from your channel within two weeks of the submission date’ (Tassi 2015).

To make matters worse, this list did not include some of Nintendo’s biggest IPs, such as Pokémon, Super Smash Bros. and a licence they had just acquired, Bayonetta. At the time of writing, the Nintendo Creators Program only accepts applications from users in the United States, Japan, Canada, Brazil, Mexico, Panama, Chile, Peru and Argentina.

Remarkably, this prevents players in Oceanic countries such as Australia, as well as one of Nintendo’s biggest markets, the entire European continent, from legally producing marketable videos containing even a marginal amount of Nintendo game footage (Nintendo 2015).

To illustrate Nintendo’s relationship with YouTube and remediatory content in a nutshell: the Wii U version of Minecraft (2015) recently received a free, licensed downloadable content pack from outsourced developer 4J Studios that includes 40 Mario-related characters and other miscellaneous items. The studio began receiving messages from fans complaining that the DLC, a collaboration between original developer Mojang and Nintendo, was getting their Minecraft videos reported, despite 4J Studios being ‘assured this wouldn’t happen’ directly by Nintendo (Klepek 2016).

This egregious policy comes at an unfortunate time for Nintendo. Despite having just launched titles in the aforementioned game series, the Wii U became the company’s slowest-selling console to date (Tassi 2015).

Evidently, consumers were frustrated by these bewildering, bordering on anti-consumer decisions, which are core examples of Nintendo’s stubborn corporate culture and reluctance to adapt to a changing media landscape in which consumers have much more of a say. One of Nintendo’s latest titles, Mario Maker (2015), allowed users to create and share their own Mario-themed levels online with other players. Reminiscent of the company’s difficulty working with YouTubers, many level creators found their content had simply vanished.

The game’s terms of service state that once an uploaded level has been deleted, it cannot be re-uploaded again. Prominent full-time streamer David ‘GrandPOObear’ Hunt discovered that his entire catalogue of Mario Maker levels and ‘stars’, the in-game equivalent of a recommendation, had been wiped from his profile without any feedback on the cause and with no solution offered by a customer service representative (Klepek 2016). Hunt worries that his outspoken views on some of Nintendo’s policies, like the Creators Program, are the cause of what he understandably interprets as a form of punishment.

Others have taken to Nintendo’s support line only to be told there’s no evidence the stage violated any of the terms of service. The only stipulation the game indicates is that ‘after a fixed period of time, courses with low popularity will be automatically deleted from the server’; however, this seemingly applies to users’ levels at random, without any feedback, explanation or indication of what defines ‘popularity’. Even more questionably, it sometimes affects entire accounts like Hunt’s (Klepek 2016).

In April, Nintendo added an article to its support directory in an attempt to answer these questions. It states that levels featuring low stars or plays, documented bugs, titles that request stars from other users, or any inappropriate content will be removed (Nintendo 2016). Without confirmation or an outline of what ‘inappropriate content’ entails specifically, it could be argued that Hunt’s username is what caused his hundreds of hours of Mario Maker levels to be deleted, despite the customer service representative stating that it shouldn’t have been a problem (Klepek 2016). The representative called him back a few days later to inform him that his levels would be returned, but that never came to pass (Klepek 2016).

 

Figure 1: ‘Nintendo’ on Google Trends

 

Based on a customised Google Trends analysis focusing on the popularity of the keywords ‘Nintendo’ (red), ‘Mario’ (yellow) and ‘Zelda’ (blue) on the YouTube platform from January 2012 to January 2016, it’s possible to speculate about specific spikes and dips by making informed observations concerning Nintendo’s interactions with the YouTube network.

There exists a somewhat turbulent level of interest before January 2013, particularly around June and July 2012. Based on these findings (calculated relatively by comparing the graphed period against the all-time highest search ratio for each term), this period of interest can be attributed to fans’ searches for news from Nintendo’s first English-language ‘Nintendo Direct’ online broadcast (Nintendo 2012).

These presentations are produced to coincide with the annual ‘E3’ (Electronic Entertainment Expo) and serve to deliver information about upcoming hardware and software, so it’s not unexpected to observe regularly increased traffic during the June and July period.

After the company’s original crackdown on videos relating to its intellectual property in January 2013, however, there’s an expected spike in interest. After this initial spike, there’s an undeniable general slump in interest aside from the usual E3 period of June and July, as fewer videos containing Nintendo material are being uploaded despite the introduction of the lacklustre Creator’s Program.

The company’s struggles with the YouTube community are just one aspect of a series of contentious practices that are unbecoming of a business built atop years of goodwill from many devoted, long-time fans. Some of these fans adapt Nintendo’s games with mods in a show of ‘communal art-form, one contrasting with the commercial culture from which it is derived’, as Jenkins argues (1992, p. 249). This practice is an expression of passion for the source material and a desire to build on it and share the adaptation with others who are equally appreciative.

Where Nintendo maintains that modders are trying to ‘steal profits from copyright owners’, Jenkins defends the remediatory dynamic, elaborating that the intent is to ‘express ideas, create dialog, and contribute to a culture’ that builds on the published product (1992, p. 249).

In ‘Convergence? I Diverge’, Jenkins states that modding can ‘extend the game’s commercial life’ and help foster a creative community (2001, p. 67). While outright rip-offs and unaltered ROMs are harder to justify, the court case of Lewis Galoob Toys, Inc. v. Nintendo of America, Inc. ruled that mods fundamentally altering the intended purpose of a game have ‘the potential to improve the market for the original by adding variety to it’ and so are covered under fair use (Tushnet, 1997, p. 670).

The machinima scene is similar to the modding community in that it’s a practice involving ‘the merging of the commercial and the contemporary’ (Picard 2006) spheres, where once-passive consumers are able to remix the content within retail products and reproduce remediated material from it. This adaptation of gameplay to the cinematic form represents the participatory paradigm perhaps even more thoroughly than either Let’s Plays or the modding community does, as it signifies a ‘metamorphosis of the player into a performer’ (Lowood, 2005, p. 8), moving beyond the archaic perception of the user as a passive consumer.

While machinima lacks an equivalent court ruling, scholars generally agree that this form of remediatory behaviour ‘contributes to overall innovation and diversity in the industry’ in much the same way modding does (Kerr, 2006, p. 73).

Despite this support from the academic community and a legal precedent, Nintendo has generally treated mods, machinima and fan films in the same manner as infringing YouTube content by sending cease-and-desist notices (Matulef 2016), be they manual DMCA filings or Content ID claims.


Nintendo’s abrasive approach to handling supposedly copyright-infringing videos on YouTube reflects a fundamentally outdated perception of the current media-consuming climate. The entertainment industry now operates as a collaborative, omni-directional process in which consumers also take on the roles of producers, critics and ‘remixers’, promoting discussion while enriching and expanding the core game experiences by producing derivative works under the protection of the fair use doctrine.

Nintendo’s exploitation of the Content ID system effectively shakes down fans whose primary income is potentially their YouTube revenue, while leaving fans in select countries without any lawful ability to produce videos, and therefore without any income. As a consequence, larger channels write off the company’s games and produce fewer videos about them, resulting in a steady worldwide decline in interest in Nintendo and its products on YouTube, as observable in the Google Trends diagram.

This means that long-time, video-producing fans are left disenfranchised by their treatment, the channels’ followers and prospective buyers uninterested, and younger, future consumers unexposed to Nintendo’s entertainment ecosystem and with a smaller chance of being assimilated into it. The comparable actions taken against creators of remediatory work such as mods, machinima and fan films reaffirm the argument that Nintendo’s aggressive attempts at enforcing the full extent of its intellectual property rights are misguided and a significant detriment to the beloved company’s reputation and, consequently, the exposure of its brands.


 

References:

The technical, practical and communal necessities for creating a machinima film

My original project began with the goal of producing an original, quality ‘machinima’ film, which Jenna Ng (2013, p. xiv) defines as ‘films made by real-time three-dimensional computer graphics rendering engines’. The aim of this project was two-fold. First, I was displeased by the apparent semantic hijacking of the term ‘machinima’, which was once understood as the practice of using games as a medium for film-making but is now synonymous with the media outlet ‘Machinima, Inc.’, known instead for creating news videos and commentated gameplay; this resonates with Jenkins’ assertion that ‘our core myths now belong to corporations, rather than the folk’ (Jenkins, 2000, p. 69).

Having built its business on the backs of people’s machinima contributions, the company has since branched out and left the machinima art form behind. This conceptual warping of ‘machinima’ inspired me to create a ‘proper’ machinima project. The second factor was the notable lack of such films based around Rainbow Six: Siege (2015). Using this game as the platform for the project sounded intriguing, as it features a grounded human setting with which to effectively create suspense, but also the opportunity to exploit this grave context for some subversive, genre-bending jingoistic humour to avoid being taken too seriously.

With this idea in mind I started work on the script and began establishing a theme, a loose plot and the means by which to perform and record my film. Unfortunately, I encountered numerous setbacks and issues which brought my project to a halt. It became clear this seemingly trivial task had become too problematic to continue, so I decided instead to write about the issues I encountered.

To achieve this, I’ll align my experience attempting to create a machinima film with the practices and paradigms required for machinima-making, as supported primarily by readings in Understanding Machinima (2013) and Leo Berkeley’s Situating Machinima in the New Mediascape (2006). As the inciting factor, Siege will serve as an indicator of machinima compatibility precisely because of its obstinate incompatibility. With this analysis I will explore what makes a game machinima-able by outlining the technical, practical and communal necessities for creating a machinima film.

LaPensée & Lewis suggest that by the simple act of ‘adapting gameplay to the cinematic form’ we are subverting the expectations of the game’s makers, no longer playing a game so much as playing with the structure of a game (2013, p. 196). Even while undermining developers’ expectations, games at least need to be amenable to film-making and to facilitate it wherever possible. For example, Blizzard’s games usually include the ability to ‘control camera and [export] video to digital editing software’ (Barwell & Moore, 2013, p. 210), allowing the player greater control over shots and a smoother transition to the editing phase.

Siege features neither of these abilities, already severely limiting its filmic potential. Leo Berkeley, a long-time filmmaker but an outsider to the medium of video games, applauds the breadth of camera options available to machinima makers, stating that ‘observing the action from a high angle has some advantages’ (2006, p. 72) which are possible but rarely used in traditional film production. This dexterity holds a marked advantage over ‘real’ filming, and games with this ability, such as World of Warcraft (2004), are immediately more suitable for generating machinima works. Danilovic (2013, p. 178) supports this notion, claiming that ‘unusual camera angles portray powerful emotional landscapes and make compelling aesthetic statements’, but this is harder to accomplish without basic control over camera movement and placement.

According to Jenkins, ‘modding’ not only ‘extends the game’s commercial life’ (2001, p. 67) and helps foster a creative community, but can completely eschew the intended purpose of a game by removing ‘restrictions on camera movements, lighting, and other production elements’, as argued by Barwell & Moore (2013, p. 221).

Depending on the genre of movie you’re making, the ability to, for example, remove HUD elements from the film-maker’s UI can be of paramount importance to the sense of immersion and tone. Second Life (2003) fully supports modding, and so the community has been permitted to produce add-ons like ‘The Eye’ (2016) which hides the user’s avatar and name-tag.

This is acknowledged by Barwell & Moore, who applaud Second Life machinima in particular because of its ‘extensive amount of modification owing to the user-generated nature of the virtual world’ (2013, p. 197). Siege does not allow for user-generated content, complete disabling of the HUD, or mods of any kind, which greatly limits its thematic scope and community engagement to what’s already been established. There’s little opportunity for remediation; the regulating of options limits its only practical application to cinematic action sequences. Indeed, my multiple inquiries to Ubisoft and individual developers via Twitter regarding the broken ‘widescreen letterbox’ feature in Siege were met with silent indifference.

That said, Second Life’s popularity as a medium for machinima films can be attributed to a number of features. The lack of explicit goals makes this genre of simulation games cinematically malleable; they ‘present novel and unusual ways of looking at animated bodies, identities, stories, and worlds’ (Danilovic, 2013, p. 184) by virtue of their contextual flexibility and accessibility.

Combined with a dynamic economy and situation within a real, albeit virtual, society, Berkeley states that this creates an invaluable ‘potential for uncertainty and unpredictability’ (2006, p. 73), which can be a unique and envied quality in narrative storytelling. The narrative driving his machinima film Ending with Andre (2005) was a direct result of unscripted AI behaviour intruding on the ‘actor’s’ life, depicting ‘an angry man dressed in black’ hounding the protagonist. Berkeley was able to use this random encounter to weave a plot that portrays an abusive ex confronting the character.

This scene was possible due to the game’s random nature, making it possible to ‘follow a script but also… improvise and adapt’, as LaPensée & Lewis propose (2013, p. 200). Despite the limited ‘expressive possibilities of animated game characters compared to human actors’ and the inability to lip-sync that Berkeley faced during production, when edited and framed by narration the scene achieved surprisingly emotional moments (2006, p. 72).

By contrast, Siege has somewhat limited room for chance events, encompassing only situational occurrences akin to a poor grenade toss resulting in the death of the thrower. There isn’t, as LaPensée & Lewis remark, much in the way of ‘unique combinations of opportunities for creative remediation’ (2013, p. 188), owing to the predictable AI and the fundamentally unavoidable first-person perspective and its associated tropes.

Unfortunately, uncontrollable factors can also be a detriment. Barwell & Moore caution that choosing to film in an ‘MMO’ environment such as World of Warcraft ‘leaves open the possibility of interference from players not involved’ with the machinima process (2013, p. 217). This is a problem largely dependent on both the genre of game and any restrictions barring custom game sessions, which could otherwise be used to minimise interference from other individuals. To its credit, while other games can suffer from this, Siege allows for custom servers that don’t require a minimum number of participants and can be accessed online, not just via the same network.

As machinima practitioners rely on ‘wide dissemination of their work across the Web’, an online connection to servers isn’t an unthinkable prerequisite (Danilovic, 2013, p. 184). However, the ability to access a game when there’s no internet connection, so the ‘machinimator’ can continue recording, is appreciated. Siege denies prospective machinimists access to the entirety of its gameplay, including the single-player campaign, should they not have the means to perform mandatory updates.

While it’s anticipated that ‘game manufacturers [will] regularly update the game world’, this should not unnecessarily ‘disrupt the machinima production process’ (Barwell & Moore, p. 217). Despite this, the offline modes I just finished lauding are rendered inaccessible when Siege’s own client repeatedly crashes and cannot complete an update. Therefore, a game like The Sims series, with its ability to function offline, should be considered a potential candidate even if the performance is more akin to janky ‘virtual puppetry’ (Nitsche, 2005) than the more natural animations that could be captured with Siege.

Ultimately, the factors discussed above to assist would-be machinima makers in choosing a game do minimise the ‘intense and meticulous labour… concentration, and organization’ (Ng & Barrett, 2013, p. 234) necessary while film-making; however, one must recognize the external factors that can determine the success or failure of a project. Danilovic makes sure to point out the ‘technical quirks of shooting with capture screen software’ that are unique to working in a digital setting. These range from choosing compression formats or codecs and sorting out frame-rate issues, to managing the ‘excessive’ storage requirements for raw video files (2013, p. 183).

For their part, Barwell & Moore recognize machinima as a ‘translocation of various forms of filmmaking skills’ (2013, p. 217), and the required skillset necessitates the ability to navigate any technical pitfalls. If the creation of machinima is the ‘employment of wit, subversion, and mischief’, as Ng & Barrett suggest (2013, p. 232), the post-production and composition can be exercises in tedium, restraint, and compromise. LaPensée & Lewis, however, remark on the relative flexibility machinima possesses throughout the entirety of the production process compared to film or television (2013, p. 200).

Lastly, machinima produced with footage from a commercial video-game, what Jenna Ng dubs ‘first wave machinima’ (2013, p. xvi) and what detractors might write off as ‘low culture’ film (DeLappe, 2013, p. 164), is contingent on the state of a game’s community. If the developer has continually supported the game and empowered users, as is the case with Linden Lab’s Second Life, then there likely exists both encouragement to produce fan material and a passionate community interested in consuming it. Unfortunately, while Siege has a somewhat active player following, publisher Ubisoft does little to encourage fan work based on the game or to promote existing examples.

Evidently, a multitude of factors contribute to the machinima-ability of a video-game. Some are embedded in the core design decisions of the game, like being able to change cosmetic details and remove the HUD at will, while other hurdles can sometimes be surmounted via the approval of modding to add functionality that assists production, e.g. making both the user and their name-tag invisible or completely overhauling the visual theme.

Similarly, the goalless nature of simulations in particular can lead to unscripted and unrepeatable situations that are a distinctive characteristic of video-games as a platform. As a completely digital medium, the need for updates is inevitable, yet it can be an unexpected hindrance should you choose to film with a game that requires an internet connection. Further complicating matters are post-production and finalising the film, with technical considerations like frame rates and codecs that require a basic familiarity with the editing software and production pipeline.

Rainbow Six: Siege fails to comply with all of these outlined concerns and is ultimately unfit for adaptation to the machinima art form, with the coup de grâce being the apathetic community and lack of promotion or encouragement by Ubisoft.

UPDATE 30/9/17: With the renewed interest in Siege from the average gamer, the Source Filmmaker suite has kick-started a number of machinima films such as this. This tool-set doesn’t require the base game to make films, so restrictions in the original game’s environment can be bypassed.

References:

  • Berkeley, L. 2006, ‘Situating Machinima in the New Mediascape’, International Journal of Emerging Technologies and Society, vol. 4, no. 2, pp. 65-80.
  • Barwell, G. and Moore C. 2013, ‘World of Chaucer: Machinima and Adaptation’, in Understanding Machinima: Essays on Filmmaking in Virtual Worlds, edited by Jenna Ng, Continuum: London.
  • Ng, J. and Barrett, J. 2013, ‘Introduction’ & ‘A pedagogy of craft: Teaching culture analysis with machinima’, in Understanding Machinima: Essays on Filmmaking in Virtual Worlds, edited by Jenna Ng, Continuum: London.
  • LaPensée, E. and Lewis, J.S. 2013, ‘Call it a vision quest: Machinima in a First Nations context’, in Understanding Machinima: Essays on Filmmaking in Virtual Worlds, edited by Jenna Ng, Continuum: London.
  • Danilovic, S. 2013, ‘Virtual lens of exposure: Aesthetics, theory, and ethics of documentary filmmaking in Second Life’, in Understanding Machinima: Essays on Filmmaking in Virtual Worlds, edited by Jenna Ng, Continuum: London.
  • DeLappe, J. 2013, ‘Playing Politics – Machinima as live performance and document’, in Understanding Machinima: Essays on Filmmaking in Virtual Worlds, edited by Jenna Ng, Continuum: London.
  • Jenkins, H. 2001, ‘Convergence? I Diverge’, Technology Review, vol. 104, no. 5, p. 93.
  • Nitsche, M. 2005, ‘Film Live: An Excursion into Machinima’, Developing Interactive Narrative Content, viewed 29/5/16,
    < http://www.lcc.gatech.edu/~nitsche/download/Nitsche_machinima_DRAFT4.pdf >
  • Jenkins, H. 2000, ‘Digital Land Grab: Intellectual Property in Cyberspace’, Technology Review, vol. 103, no. 2, p. 103.
  • Magic Emerald 2016, [ The Eye ] Invisibility HUD Hide Avatar & NameTag, Second Life Marketplace, viewed 29/5/16,
    < https://marketplace.secondlife.com/p/The-Eye-Invisibility-HUD-Hide-Avatar-NameTag/4393940?id= >