The race toward artificial general intelligence (AGI) continued apace in what felt like a monumental week for AI development.

From Apple giving us a taste of its Intelligence to huge advances in AI-generated video, let’s take a look at some of the top AI stories from this week.

Apple Intelligence soft launches

It was an eventful week for the Cupertino-based device manufacturer. We saw the public debut of iOS 18, watchOS 11, and macOS 15, immediately followed by the iPhone 16 and Apple Watch Series 10 going on sale, as well as Apple rolling out its first update to the new OS with the iOS 18.1 beta.

The beta doesn’t offer the AI’s full feature suite — we don’t expect to see that until 18.1’s official release in October — but it is enough to give interested users a taste of what the generative AI agent will soon be capable of.

Though, from what Digital Trends has already seen, Apple Intelligence is likely going to need more refinement and polish before it’s ready for the public.

Lionsgate partners with Runway to train AI video models

Weird, I could have sworn last summer’s Hollywood writers’ strike happened specifically in opposition to Hollywood’s ill-considered embrace of generative AI. That collective action has apparently done little to dissuade Lionsgate, which announced this week that it is partnering with Runway, makers of the Gen-3 Alpha video generation model, from jumping right back on the AI bandwagon.

The agreement will see the two companies collaborate to develop and train a video generation model using Lionsgate’s expansive catalog of film and TV content. The two plan to use it to “develop cutting-edge, capital-efficient content creation opportunities,” which we all know is the hallmark of great cinema, and not a poorly conceived attempt to disenfranchise the thousands of storyboard artists, lighting and effects designers, actors, musicians, and others who perform the actual labor of producing movies and TV series by replacing them with a slapped-together generative AI.

Snap releases new, gigantic Spectacles AR glasses

Snap keeps trying to make AR glasses a thing. This week the company released the fifth and latest iteration of its Spectacles AR glasses line. The new hardware offers a wider field of view and a display that appears similar to “a 100-inch display 10 feet away,” while SnapOS and the associated smartphone app have both received significant upgrades over their previous versions. Snap is also reportedly teaming with OpenAI to bring “cloud-hosted multimodal AI models” to the smart glasses.

The new specs also weigh a hefty 226 grams, which is over 100 grams more than last year’s edition, and look like something Edna Mode would wear. They’re currently only available to developers who shell out $99/month for program access, and there is no word yet on when a consumer version will be released.

YouTube’s new AI tools handle most of the content creation process for you

In an effort to lower the barrier to entry for new content creators and better compete with short-form video platforms like TikTok, YouTube introduced a bevy of new AI-enhanced production tools this week. Google announced Wednesday at its Made on YouTube event in New York City that DeepMind’s Veo video generation model will be incorporated into YouTube Studio. The model can generate six-second clips in 1080p resolution and in a wide variety of cinematic styles from a single text prompt.

The company is billing these new features as a “brainstorm” assistant that can suggest topics for a video, as well as generate a title, thumbnail, and the first few lines of the script. Users will also be able to use Veo in conjunction with Dream Screen, which generates AI background images. You’ll be able to create a static background with Dream Screen and then animate it using Veo.

Incidentally, did you know that having a chatbot write a 100-word email for you consumes the equivalent of three bottles of water and the electricity needed to run 14 LED light bulbs for an hour? Maybe try using that noggin of yours to brainstorm some original ideas instead of boiling lakes to hear a large language model’s recursive suggestions.

Runway’s Gen-3 Alpha now offers video-to-video generation

Runway Gen-3 Alpha just leveled up with Video-to-Video

Now you can transform any video's style using just text prompts at amazing quality.

10 wild examples of what's possible: pic.twitter.com/onh12zCzpI

— Min Choi (@minchoi) September 15, 2024

Before it announced its partnership with Lionsgate, Runway started the week by rolling out a new feature for its Gen-3 Alpha video generation model: the ability to change the cinematic style of any video through text prompts. AI enthusiasts are having a field day with the new tool.

You can see the technology in action in the social media post above. Runway also debuted an API this week that will enable third-party developers to incorporate the video model into their own apps, systems, and devices.
