Insights & Analysis: Publicly Traded Companies in Q3 2023
Every quarter, business analyst Matthew Scott Goldstein shares his insights on what he saw happen with publicly traded companies across advertising, technology, and media.
Top-Level Insights for Publicly Traded Companies: Q3 2023
Macro Environment: What We Saw in Q3 2023
- Google’s Expanding Influence: Google’s advancements position it as a pivotal figure in the AI-centric transformation of journalism. But there’s more to Google’s role in this space than just AI tools. With its dominance in search, Google has profoundly influenced how users find and interact with content. The rise of Search Generative Experience (SGE) answers means users often get the information they need directly from the search results, eliminating the need to visit the original publisher’s website. Front-door traffic becomes more important than ever. As an example, no publishers are currently blocking Google, but ~26% of the top 1,000 publishers have blocked ChatGPT. I still believe that once SGE is fully rolled out by Google, many publishers’ traffic will decrease 25-35%, not good.
- Publishers Should Be Paid for LLMs: OpenAI, Google, Microsoft and Adobe have met with news executives in recent months to discuss copyright issues around their AI products. Publishers can explore partnerships or legal avenues to protect their intellectual property rights and ensure they are fairly compensated for the value their content contributes to AI development. This battle between big tech and tens of thousands of individual publishers does not seem like a fair fight, just saying.
- Redefining Editorial Roles: The future of journalism will revolve more around idea generation than traditional writing. With AI handling the bulk of the content creation, the emphasis for journalists and editors will pivot to curating and conceptualizing, infusing AI-produced content with human touch, emotion, and integrity. Generative AI does make mistakes, though, so editors will become more important than ever.
- Economic Impact on Publishers: The advent of AI in publishing isn’t just about efficiency and automation; it has considerable economic ramifications. While AI might bring cost efficiency in some areas, it could necessitate substantial investments in others, especially in technology, training, and oversight. It also raises questions about the future job market for traditional publishers.
- The Landscape for Smaller Publishers: As the media giants are quick to harness the power of AI, what about smaller publishers? When will newer publishers like Semafor or The Messenger reinvent themselves around a complete Gen AI, bot-like interface? That could be exciting.
- The Vitality of Large Publishers: Even as the landscape changes, large publishers will remain essential. Their capacity to deliver original content and their established reputation ensure that they will remain the cornerstone of journalism. Plus, Google needs these publishers to help build its LLMs.
- The Vertical Video Revolution: Platforms like TikTok and Instagram are already reshaping how we consume content, with vertical video reigning supreme. Generative AI, given its prowess, is poised to lead this visual narrative transformation.
- Embrace or Face Extinction: If publishers don’t recognize and adapt to the transformative power of generative AI, they may find themselves struggling to stay relevant or, worse, facing obsolescence.
Top Generative AI stories of the quarter: summary
AI in Media and Publishing:
- Ten major organizations, including The Associated Press and Gannett, urge AI regulations to safeguard journalism.
- AI is influencing media companies’ strategies, with companies like Bustle Digital Group focusing on AI-guided content and others like Axel Springer and Insider experimenting with AI in newsrooms.
- Controversies emerge as some media outlets increase AI-produced stories, while others express concerns about AI’s impact on journalism.
Tech Giant Developments:
- Google is expanding the capabilities of Bard by integrating the chatbot with Gmail, Maps, YouTube, and other apps.
- Google has given a small group of companies access to an early version of Gemini, its highly anticipated conversational artificial intelligence software.
- Google launched Search Generative Experience (SGE), a set of search and interface capabilities that integrates generative AI-powered results into Google search engine query responses.
- Amazon partners with AI company Anthropic, investing up to $4 billion.
- Apple continues to invest in AI, despite perceived lagging behind competitors.
- Bing introduces AI features but struggles to challenge Google’s dominance.
- Google updates its SEO guidelines to embrace AI-generated content and tests an AI tool for news creation.
AI Copyright Issues:
- High-profile authors, including John Grisham and George R.R. Martin, sued OpenAI over copyright infringement.
- The Associated Press establishes a clause to renegotiate its licensing agreement with OpenAI in case other publishers get better deals.
- Many lawsuits against OpenAI emerge over the use of copyrighted training data, with publishers arguing that AI’s ability to summarize news reduces traffic to their websites.
AI in Business and Product Updates:
- Amazon sets a publishing limit on its platform due to AI-generated content influx.
- Artifact offers a text-to-speech feature powered by Speechify with premium celebrity voices.
- OpenAI partners with the American Journalism Project to explore AI’s role in local journalism.
- AI startup Writer Inc. raises $100 million to assist businesses with content writing using language models.
AI Integration in Other Companies and Platforms:
- Spotify introduces an AI feature for podcast translation.
- Getty Images releases a generative AI tool for image creation from text.
- Zoom focuses on its AI companion, ZoomIQ, for meeting summaries.
- Microsoft announces an AI subscription service for Microsoft 365.
AI Principles and Regulations:
- 26 journalism and publishing organizations release a global set of AI principles focusing on intellectual property, transparency, and fairness.
- A Capitol Hill discussion emphasizes the need for AI regulations.
- White House announces that major AI companies, including OpenAI, Amazon, and Google, commit to setting voluntary safeguards for AI development.
Concerns and Challenges:
- News Corporation CEO, Robert Thomson, expresses concerns about AI causing job losses in the news sector.
- AI models’ ability to use vast amounts of free online data poses challenges for content creators.
- The New York Times considers legal action against OpenAI over intellectual property concerns and blocks OpenAI’s web crawler.
Top Generative AI stories of the quarter: Detail
According to the Wall Street Journal, a tentative labor agreement between Hollywood studios and writers would allow the studios to legally train AI models on the writers’ scripts. The deal still guarantees writers credit and compensation for their script work, even if studios partially use AI. — The Writers Guild of America and the Alliance of Motion Picture and Television Producers reached a tentative agreement on Sunday that could end the nearly 150-day writers’ strike. While details haven’t been released publicly, the WSJ reports that studios would retain rights to train their in-house AI tools on TV and movie scripts penned by writers. The studios didn’t want to give up those rights because AI platforms were already training models on scripts and similar materials. AI tools could be used for tasks ranging from script summarization to special effects and promotional marketing, according to the WSJ.
OpenAI has announced an update to ChatGPT that will let the AI bot talk out loud in five different voices — Think Apple’s Siri or Amazon’s Alexa except…not. The natural voice, the conversational tone and the eloquent answers are almost indistinguishable from a human at times. Remember “Her,” the movie where Joaquin Phoenix falls in love with an AI operating system? That’s the vibe. “It’s not just that typing is tedious,” said Joanne Jang, a product lead at OpenAI, which is rolling out the update in stages. “You can now have two-way conversations.”
Amazon to Invest Up to $4 Billion in Anthropic as AI Arms Race Escalates — Deal, which includes broad partnership, follows Microsoft’s multibillion-dollar stake in rival startup OpenAI. Under the strategic collaboration, Amazon will incorporate Anthropic’s AI technology into its products. Amazon cloud customers and engineers will gain early access to Anthropic’s technology, including model customization and fine-tuning capabilities. Meanwhile, Anthropic said it will use Amazon’s custom Trainium and Inferentia chips to build, train, and deploy its foundation models for AI applications. Anthropic will receive a financial boost to cover the substantial expenses of training and operating large AI models. The startup has also named Amazon Web Services (AWS) as its primary cloud provider.
Meta plans to launch personality-driven AI chatbots on its platforms. The “Gen AI Personas” are set to launch across Instagram, Facebook, and WhatsApp, targeting Gen Z users, who are the most frequent users of ChatGPT and could be more likely to embrace Meta’s rival chatbots.
Spotify Taps AI to Replicate Podcaster Voices — Spotify has unveiled a new artificial intelligence-powered feature that translates podcasts into different languages using the host’s voice.
Stock photo platform Getty Images introduced a generative AI tool trained exclusively on its own licensed content. The “commercially safe” tool generates images from text prompts, providing customers with a royalty-free license and protection against copyright lawsuits. — The “Generative AI by Getty Images” tool is built on Nvidia’s Edify text-to-visual AI model and trained on a portion of Getty’s roughly 477 million assets. The tool restricts the creation of images depicting public figures or mimicking a living artist’s style. All images generated by the tool are watermarked as AI-generated. In February, Getty Images sued Stability AI, the creator of the Stable Diffusion image generator, alleging copyright infringement. A study of 12 million photos from Stable Diffusion’s training dataset indicated that more than 15,000 of those photos came from Getty.
AI-powered products announced by Google, Amazon, and OpenAI have displayed numerous flaws and glitches, suggesting a rushed development and rollout, according to The Washington Post. Despite the risks, companies are rapidly deploying AI amid intense competition in the generative AI sector, the publication reports. According to the Post, Google’s Bard chatbot, which can summarize Gmail and Google Docs files, has made up fake emails that were never sent, per user reports. After OpenAI introduced the Dall-E 3 image generator this week, social media users noticed missing details in images during official demos. Amazon also unveiled a conversational mode for Alexa but encountered problems during demos, including suggesting a museum in the wrong location to The Washington Post. The companies have stressed that their AI systems remain a “work in progress” and said they have included safeguards, like those that prevent the generation of offensive or biased statements.
Amazon has created a new rule limiting the number of books that authors can self-publish on its site to three a day, after an influx of suspected AI-generated material was listed for sale in recent months. The company announced the new limitations in a post on its Kindle Direct Publishing forum, which allows authors to self-publish their books and list them for sale on Amazon’s site.
OpenAI has unveiled Dall-E 3, the latest version of its text-to-image tool that uses its AI chatbot ChatGPT to help fill in prompts. Dall-E 3 will be available to ChatGPT Plus and Enterprise customers in Oct. 2023, via the API. Users can type in a request for an image and tweak the prompt through conversations with ChatGPT. OpenAI said the latest version of the tool will have more safeguards such as limiting its ability to generate violent, adult, or hateful content.
The race to bring A.I. to the masses heats up — Months after OpenAI’s ChatGPT seized the world’s imagination and spurred a global debate about how to regulate artificial intelligence, use of the chatbot has dropped sharply. (Though that may just be tied to students having been on summer vacation.) Still, rivals are betting heavily on A.I. and a flurry of announcements by tech giants this week shows how they’re racing to dominate one of the most transformative technologies in decades. Google is putting its Bard chatbot in Gmail, Google Docs, YouTube and more, hoping that tying its technology to some of the world’s most popular tech services will speed its adoption. Though the company was a pioneer in A.I. research, it is now under pressure to catch up to rivals like OpenAI, with Bard trailing ChatGPT in usage. But The Times’s Kevin Roose found that the newly revamped Bard is “a bit of a mess,” with the chatbot inaccurately summing up emails and inventing facts (a phenomenon known as hallucinating). “We know it’s not perfect,” Jack Krawczyk, the head of Bard, told Kevin. Amazon is using A.I. to make Alexa smarter. The company is applying the latest tech to its voice assistant, which it says will allow Alexa to understand more conversational phrases and handle multiple requests. (For instance: Instead of specifically telling Alexa to activate a connected thermostat, users will soon be able to simply say, “Alexa, I’m cold,” Dave Limp, Amazon’s head of devices and services, told The Verge.) While Alexa helped ignite interest in “smart” assistants a decade ago, it hasn’t advanced much since then, particularly in user request comprehension. Giving the service new brains, the company hopes, will help make up for lost ground — and, Limp suggests, get consumers to eventually pay a subscription for that enhanced version. Microsoft is expected to introduce A.I. features for some of its most popular software at an event today. These could include a Copilot personalized assistant for Windows that’s similar to ones tied to Office 365 and the GitHub coding tool. The company may also announce new A.I. capabilities for its Surface line of computers. Putting more A.I. in consumer hands carries risks, including for privacy — Google warns users against giving Bard data that they wouldn’t want a reviewer to see — and for hallucinations. Those concerns are top of mind for regulators worldwide as they weigh new rules. In other A.I. news: Prominent authors including the novelists Jodi Picoult and John Grisham, have sued OpenAI over their works being used to train ChatGPT without permission or compensation. Speaking of ChatGPT, the chatbot can now generate shockingly detailed images.
John Grisham, Jodi Picoult and George R.R. Martin are among 17 authors suing OpenAI for “systematic theft on a mass scale,” the latest in a wave of legal action by writers concerned that artificial intelligence programs are using their copyrighted works without permission. In papers filed Tuesday in federal court in New York, the authors alleged “flagrant and harmful infringements of plaintiffs’ registered copyrights” and called the ChatGPT program a “massive commercial enterprise” that is reliant upon “systematic theft on a mass scale.” The suit was organized by the Authors Guild and also includes David Baldacci, Sylvia Day, Jonathan Franzen and Elin Hilderbrand, among others. “It is imperative that we stop this theft in its tracks or we will destroy our incredible literary culture, which feeds many other creative industries in the U.S.,” Authors Guild CEO Mary Rasenberger said in a statement. “Great books are generally written by those who spend their careers and, indeed, their lives, learning and perfecting their crafts. To preserve our literature, authors must have the ability to control if and how their works are used by generative AI.”
Google is expanding the capabilities of Bard by integrating the chatbot with Gmail, Maps, YouTube, and other apps. Bard was released in March. In August, the chatbot had around 183 million web visits, far below ChatGPT’s 1.5 billion desktop and mobile web visits.
Generative AI in publishing: Q3 2023
Bustle Digital Sees AI as a Gift to Publishers — BDG isn’t using artificial intelligence to generate content, but the rise of the technology in the media ecosystem is reshaping BDG’s editorial strategy, Charlotte Owen, editor in chief of Bustle and Elite Daily at Bustle Digital Group, explained onstage on Monday at the Digiday Publishing Summit in Key Biscayne, Fla. “AI has given all publishers a gift because it’s really given us a north star of where we shouldn’t be,” Owen said. “We have to un-romanticize the way we talk about AI.” The lifestyle publisher is doubling down on publishing fewer stories, focused on original reporting and personal essays, to differentiate its articles from SEO-driven content that could be replicated by generative AI technology, Owen said. “I don’t think there’s a place for a robot in a conversation between two moms,” she said, as an example. And if one of her writers is writing a story that could’ve been written by generative AI, “they’re writing the wrong story,” she said. This strategy has led to traffic increases (she did not share how much) on original reported pieces, such as recent digital cover stories featuring celebrities like actress Rachel McAdams, Owen said. “We think about what we can be doing that AI can’t,” she said. “But that doesn’t mean we’re shoving our head in the sand.”
AI Startup Writer Raises $100 Million to Pen Corporate Content — The company’s technology can write and summarize a wide range of text. Writer Inc., a startup that helps businesses write and summarize a wide range of content, has raised $100 million in a deal that values the company at more than $500 million. Led by Chief Executive Officer May Habib, Writer is the latest artificial intelligence upstart to use large language models in a corporate setting. LLMs are trained on large swaths of online text so they can generate writing that sounds as if it were produced by a human. The startup lets companies use AI for functions like producing product descriptions, job listings and social media posts, along with analyzing data and automating tasks.
Google has reportedly given a small group of companies early access to test its conversational AI system Gemini, as the buzz around the LLM rival to OpenAI’s GPT-4 continues to grow. Gemini is able to power chatbots, summarize text, write content, and more — with companies testing a smaller version of the full model. Google aims to make Gemini widely available via its Google Cloud platform, competing with OpenAI’s API access. The search giant also recently added AI features to Search and enterprise tools — but Gemini is its biggest generative AI play yet. An anonymous source claimed that Gemini will be trained on YouTube video transcripts (per Android Police). In the LLM frenzy, the winner will likely have access to the largest and richest training dataset. And if Google is training Gemini across YouTube, Google Search, Google Books, and Google Scholar — it will surely give GPT-4 a run for the top spot.
Google Updates its SEO Playbook for Content Generated With AI — Google has dropped “content written by people” and now says it looks for quality content—no matter how it was generated. Google has long preached the gospel of “content written by people, for people.” But in a recent update, the search giant is quietly rewriting its own rules to acknowledge the rise of artificial intelligence. In the latest iteration of Google Search’s “Helpful Content Update,” the phrase “written by people” has been replaced by a statement that Google is constantly monitoring “content created for people” to rank sites on its search engine. The new language shows that Google recognizes AI as a tool heavily relied upon in content creation. But instead of simply focusing on distinguishing AI from human content, the leading search engine wants to highlight valuable content that benefits users, regardless of whether humans or machines produced it. Google is meanwhile investing in AI across its products, including an AI-powered news generator service along with its own AI chatbot Bard and new experimental search features. Updating its guidelines, then, also aligns with the company’s own strategic direction. The search leader still aims to reward original, helpful, and human content that provides value to users. “By definition, if you’re using AI to write your content, it’s going to be rehashed from other sites,” Google Search Relations team lead John Mueller noted on Reddit.
Generative AI in the job market: Q3 2023
AI could potentially lead to a ‘tsunami’ of job losses in the news industry: News Corp. CEO — News Corporation CEO Robert Thomson sits down with Yahoo Finance Executive Editor Brian Sozzi at the Goldman Sachs Communacopia & Technology Conference as he unpacks the impact of AI on the media industry. Thomson states that AI will be “epochal” for news, going on to warn that the industry could face a “tsunami potentially of job losses” due to the tech. Thomson says that with AI the danger is “rubbish in, rubbish out, and, in this case, rubbish all about,” emphasizing his point that there is no management of content with AI. “Instead of elevating and enhancing, what you might find is that you have this ever-shrinking cycle of sanity surrounded by a reservoir of rubbish,” Thomson says. Thomson goes on to say that AI can’t replace editorial roles and “great writing,” but it will have an impact on roles that do things that are “replicative and iterative.” Thomson also weighs in on whether his outlets will be covering the 2024 election cycle and the potential boost in ad spending that comes with it.
Gizmodo Replaces Staff Of Its Spanish-Language Site With AI — Tech site Gizmodo has laid off the entire small staff of its Spanish-language site Gizmodo en Espanol and has reverted to AI-automated translation, according to published reports. “Hello friends. On Tuesday they shut down @GizmodoES to turn it into a translation self-publisher (an AI took my job, literally),” wrote writer Matías S. Zavia in a social media post, according to ArsTechnica. Translated articles now contain this disclaimer: “This content has been automatically translated from the source material. Due to the nuances of machine translation, there may be slight differences. For the original version, click here.” The disclaimer may be needed, given that some AI-translated articles start in Spanish and abruptly shift to English, The Verge reports.
‘Nothing is ready for prime time’: Journalists push back against publications’ race to have newsrooms use generative AI tools — Journalists have a message for their employers: generative AI tools are not good enough yet for writing articles. Digiday spoke to seven journalists at five digital publishers experimenting with artificial intelligence tools to find out what they thought about their organizations testing the technology to create content. All of them said they wanted their managers to proceed with caution. Their stance is that the technology is not good enough for content generation (yet), and ultimately they’re concerned that the adoption of AI for editorial purposes is a threat to their jobs. “I’m not sure that the technology is ready [in] the way that managers of newsrooms think it is,” said one G/O Media employee, who requested anonymity in order to speak freely. “I don’t think any of us are very fired up about being the guinea pigs [and] having the outlets that we represent being the guinea pigs for this.” Some publishers like BuzzFeed, Forbes, Insider, and Trusted Media Brands created task forces earlier this year to oversee AI initiatives at their respective companies, including with representatives from their editorial teams. But some of the journalists Digiday spoke with said their employers have not included them in conversations about how their newsrooms would use generative AI tools. They think that’s a mistake and why there have been a number of recent snafus in the news. Two employees at Insider that spoke with Digiday said they have been encouraged by management to do their own tests with generative AI tools to help them work more efficiently.
Editorial ‘co-pilots’ and monetising archives: Generative AI in action at ITN, Future, Bauer, AP and others — Including testing paywall copy, training chatbots on one expert site and translating news. At the Future of Media Technology Conference 2022, there were no mentions of generative AI. By contrast, this year there were two panels explicitly devoted to the subject and countless more discussions throughout the day. Executives and journalists from publishers and vendors including ITN, Sky News, The Guardian, AP, Future, Bauer, GB News, Mediahuis, ArcXP and Affino shared how they have begun to use generative AI and what they see in the immediate future. Future chief executive Jon Steinberg said his company had experimented with generative AI by creating chatbots that have read the entirety of a site, for example computing product review title Tom’s Hardware, and allowing users to ask questions such as “What CPU is ideal for this computer case?” “So there’s no hallucination, there’s no false data,” Steinberg said. “It’s read expert content and it’s coming up with a result from that.” Future has also been using generative AI for productivity enhancement as “an editorial co-pilot”. “We are not having AI write articles,” Steinberg said. “We’re having AI assist editors in pulling together things like product specifications, or editing video to different formats so that we can take video that’s on the site and post it to social.”
Tech leaders and lawmakers gathered on Capitol Hill today to discuss AI regulation, with consensus on the need for rules but differing opinions on the approach. The closed-door forum, hosted by Senate Majority Leader Charles Schumer, drew a number of tech leaders, including Tesla CEO Elon Musk, Meta Platforms CEO Mark Zuckerberg, and Alphabet CEO Sundar Pichai. Musk stressed the need for a U.S. “referee” to ensure AI safety, likening it to sports. The forum took place in the historic Kennedy Caucus Room, known for its role in Senate investigations related to the Titanic and Watergate. Over 60 senators joined the discussion, with all acknowledging the general need for government regulation of AI but unsure how long it would take or what it would involve. However, some lawmakers were concerned about the closed-door nature of the meeting, which differed from prior public hearings with tech executives. Sen. Elizabeth Warren expressed frustration, noting that individual senators couldn’t ask questions during the morning session, which Schumer moderated. Schumer said AI legislation should ideally take months, not weeks or years. “If you go too fast, you can ruin things,” Schumer said, adding that the European Union went “too fast.” OpenAI CEO Sam Altman, who has cautioned about AI risks, praised Congress for its attention and commitment, saying, “I think they want to do the right thing.” When asked if Americans should trust tech companies for their safety, Palantir CEO Alex Karp replied, “Yes. Because we’re good at it.” Musk suggested an AI regulatory agency similar to the FTC or FAA.
Generative AI has spawned thousands of new products. But outside of ChatGPT, what are everyday consumers using? What’s growing, and what has flattened? We crunched the numbers to find the top 50 consumer web products by monthly visits. Here’s what we learned: ChatGPT currently dominates, representing 60% of all traffic, with Character.ai a distant second in the usage rankings. The rest of the top 10 include Bard, Poe, QuillBot, Photoroom, Civitai, Midjourney, Hugging Face, and Perplexity. Surprisingly, many “GPT wrappers” made the top 50, roughly evenly split between proprietary models, fine-tuned open source models, and public third-party models. Despite the hype, even the biggest AI products like ChatGPT are still small compared with mainstream apps like YouTube and Facebook.
Google Nears Release of Gemini AI to Challenge OpenAI — Google has given a small group of companies access to an early version of its highly anticipated conversational artificial intelligence software, according to three people with direct knowledge of the matter. Giving outside developers access to the software, known as Gemini, means Google is getting close to incorporating it in its consumer services and selling it to businesses through the company’s cloud unit. Gemini is intended to compete with OpenAI’s GPT-4 model, which has begun to generate meaningful revenue for the startup as financial institutions and other businesses pay to access the model and the ChatGPT chatbot it powers.
Much has been written about Google Search Generative Experience (SGE), but the most important question remains unanswered: if and when Google SGE goes live, how will it impact organic traffic? Will our traffic drop, and if so, by how much? And what can we do about it? This article presents a framework that can provide clear answers to these questions. Using this framework, we have: estimated SGE traffic drops for 23 websites; discovered an optimization technique that helps pages rank in the SGE snapshot carousel; and carried out three SGE recovery projects, in which we have mitigated (at least partially) the expected traffic drops from Google SGE. We propose an open SGE Impact Model, which anyone can implement using an Excel spreadsheet. It will help you estimate, across a range of possible outcomes, what will happen to your traffic when SGE goes live. One example cited is a real website expected to lose between 44% and 75% of its organic traffic due to Google SGE.
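The article’s open SGE Impact Model itself is a spreadsheet and is not reproduced here, but a minimal sketch of the kind of arithmetic such a model performs might look like the following. Every input (the share of queries that trigger an SGE answer, how often the site appears in the snapshot carousel, and the assumed click-through rates) is hypothetical and would need to be replaced with your own keyword-level data.

```python
# Minimal sketch of an SGE traffic-impact estimate. This is NOT the article's
# spreadsheet model; all parameter names and example values are hypothetical.

def estimate_sge_traffic_change(
    monthly_organic_visits: float,
    sge_trigger_rate: float,         # share of queries expected to show an SGE answer
    carousel_inclusion_rate: float,  # share of those answers that cite the site in the snapshot carousel
    ctr_classic: float,              # current average organic click-through rate
    ctr_when_cited: float,           # assumed CTR when the site is cited in the SGE snapshot
    ctr_when_not_cited: float,       # assumed CTR when SGE answers without citing the site
) -> dict:
    """Project monthly visits and percentage change under the stated assumptions."""
    unaffected = monthly_organic_visits * (1 - sge_trigger_rate)
    cited = monthly_organic_visits * sge_trigger_rate * carousel_inclusion_rate
    not_cited = monthly_organic_visits * sge_trigger_rate * (1 - carousel_inclusion_rate)

    projected = (
        unaffected
        + cited * (ctr_when_cited / ctr_classic)
        + not_cited * (ctr_when_not_cited / ctr_classic)
    )
    pct_change = (projected - monthly_organic_visits) / monthly_organic_visits * 100
    return {"projected_visits": round(projected), "pct_change": round(pct_change, 1)}

# Illustrative run (numbers are made up): prints roughly a 54% projected drop.
print(estimate_sge_traffic_change(
    monthly_organic_visits=1_000_000,
    sge_trigger_rate=0.80,
    carousel_inclusion_rate=0.30,
    ctr_classic=0.25,
    ctr_when_cited=0.15,
    ctr_when_not_cited=0.05,
))
```

Sweeping the assumptions from optimistic to pessimistic is how a range such as the 44-75% loss cited above would be produced in a model of this kind.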
Despite introducing the new AI-powered Bing and Bing AI Chat, Microsoft’s search engine market share has remained largely unchanged this year, according to Statcounter’s latest data. Microsoft announced the new AI-powered Bing search engine in February. As of August, Bing’s global market share was still around 3%, unchanged from January, according to Statcounter. Analytics from Similarweb indicate Bing’s monthly visitors were around 1% of Google’s in both January and July. Bing’s U.S. share was also 6.47% in July, compared to over 7% in 2022. Microsoft deems the new Bing a success and challenges external data, saying that third-party companies fail to account for all users who directly access Bing’s chat page. A Microsoft spokesperson said Bing now claims over 100 million daily active users and highlighted its growth with new access points like Bing Chat Enterprise. Amid Google’s dominant hold on the search engine realm, Microsoft’s hopes of significant market share gain are facing challenges. Bing, in its competition with Google, introduced generative AI attributes like chatbots and visual search. Google also incorporates AI through its Search Generative Experience (SGE). Daniel Tunkelang, a search consultant with experience at Google and LinkedIn (now a Microsoft subsidiary), considers the new Bing “cute, but not a game changer.”
What we saw for Google vs Bing: Global search engine market share: Q3 2023
Publishing, Journalism Orgs Release ‘Global Principles’ For AI — Twenty-six publishing and journalism organizations from around the world have released a set of principles meant to guide development, deployment and regulation of artificial intelligence systems and applications. The Global Principles for Artificial Intelligence (AI) “are aimed at ensuring publishers’ continued ability to create and disseminate quality content, while facilitating innovation and the responsible development of trustworthy AI systems,” say the groups. The News/Media Alliance, News Media Association, News Publishers’ Association, Digital Content Next, the World Association of News Publishers, the European Magazine Media Association and FIPP are among the signatories. (The full list of signatories, along with the full principles, is available online.) The principles address issues relating to intellectual property, transparency, accountability, quality and integrity, fairness, safety, design, and sustainable development. They state that AI tools must be developed in accordance with established principles and laws that protect publishers’ intellectual property, brands, consumer relationships and investments, adding that AI systems’ current “indiscriminate misappropriation of our intellectual property is unethical, harmful, and an infringement of our protected rights.” “AI systems are only as good as the content they use to train them, and therefore developers of generative AI technology must recognize and compensate publishers accordingly for the tremendous value their content contributes to the development of these systems,” states News/Media Alliance President and CEO Danielle Coffey. The principles state that developers, operators, and deployers of AI systems should:
- Respect intellectual property rights protecting the organizations’ investments in original content.
- Leverage efficient licensing models that can facilitate innovation through training of trustworthy and high-quality AI systems.
- Provide granular transparency to allow publishers to enforce their rights where their content is included in training datasets.
- Clearly attribute content to the original publishers of the content.
- Recognize publishers’ invaluable role in generating high-quality content for training, and also for surfacing and synthesizing.
- Comply with competition laws and principles and ensure that AI models are not used for anti-competitive purposes.
- Promote trusted and reliable sources of information and ensure that AI generated content is accurate, correct and complete.
- Not misrepresent original works.
- Respect the privacy of users that interact with them and fully disclose the use of their personal data in AI system design, training, and use.
- Align with human values and operate in accordance with global laws.
OpenAI’s ChatGPT website experienced a third straight month of declining traffic in August, although there are signs that the decline is coming to an end, according to data from Similarweb. Global website visits to the chatbot site fell by 3.2% to 1.43 billion in August. This is a less steep drop than in June and July, when ChatGPT traffic dropped nearly 10%. In August, there was a slight increase in the number of unique visitors, and the U.S. saw a small uptick in website visits. The return of schools in September, particularly in the U.S., is likely driving the resurgence in ChatGPT’s traffic, with college-age users and younger users seeking homework assistance. The launch of the ChatGPT iOS app in May might have also redirected some website traffic.
Google made a watermark for AI images that you can’t edit out – The SynthID watermark is meant to be impossible for you to see in an image but easy for the detection tool to spot. Google’s ready and willing for it to get tested and broken.
Newspaper chain Gannett has paused the use of an artificial intelligence tool to write high school sports dispatches after the technology made several major flubs in articles in at least one of its papers. Several high school sports reports written by an AI service called LedeAI and published by the Columbus Dispatch earlier this month went viral on social media this week, and not in a good way. In one notable example, preserved by the Internet Archive’s Wayback Machine, the story began: “The Worthington Christian [] defeated the Westerville North [] 2-1 in an Ohio boys soccer game on Saturday.” The page has since been updated. The reports were mocked on social media for being repetitive, lacking key details, using odd language and generally sounding like they’d been written by a computer with no actual knowledge of sports. CNN identified several other local Gannett outlets, including the Louisville Courier Journal, AZ Central, Florida Today and the Milwaukee Journal Sentinel, that have all published similar stories written by LedeAI in recent weeks.
AI is killing the grand bargain at the heart of the web. ‘We’re in a different world.’ — The web’s grand bargain is based on the idea that content creators will share their information online if they can get traffic from consumers. This traffic can then be used to generate revenue through advertising, subscriptions, or other means. However, AI is now making it possible for tech companies to develop powerful AI models without having to pay for the data they need to train these models, because web crawlers can collect vast amounts of data from the internet for free. Content creators are starting to block web crawlers, but there is no clear legal mechanism to stop crawlers that ignore the block, since robots.txt, the standard way to block web crawlers, is not legally enforceable. The issue is complex and there is no easy solution. Content creators, tech companies, and policymakers are all trying to figure out how to balance the needs of creators with the benefits of AI.
Google Search Generative Experience officially rolls out links to webpages within answers. Launching first in the U.S., Google said it will keep testing how it presents results and prioritize driving traffic to websites. Google is now rolling out links to webpages within the Search Generative Experience AI-powered answers. The AI-powered overview answers have down-arrow icons; when you click on them, Google will show you the relevant webpages used to help form that part of the answer. “Starting today, when you see an arrow icon next to information in an AI-powered overview, you can click to see relevant web pages, and easily learn more by visiting the sites. This is launching first in the U.S. and will roll out to Japan and India over the coming weeks,” said Hema Budaraju, Senior Director of Product Management at Google Search.
The world’s top websites are stepping up efforts to block AI web crawlers like OpenAI’s GPTBot, which collects data for training AI models. About 18.6% of the world’s top 1,000 websites, including Amazon and Quora, are now blocking at least one AI crawler, according to data from Originality.AI. Details: OpenAI recently launched GPTBot, its web crawler that collects data to develop and improve large language models. GPTBot gathers publicly accessible data, excluding content like paywalled material and sensitive information, to refine existing and future models, all while excluding restricted sources. Websites concerned about data scraping have the option to block the bot through methods like IP blocking and robots.txt adjustments (a minimal robots.txt check sketch follows the list below). What the numbers say: The bot has now been blocked by major websites such as The New York Times, Amazon, Reuters, Indeed, and others. According to Originality.AI’s analysis, blocking of GPTBot on the top 1,000 websites grew from 9.1% on Aug. 22 to 12% as of Aug. 29. The Common Crawl Bot (CCBot), another web crawler, currently has a 6.77% block rate. Surprisingly, no sites currently block Anthropic AI’s crawler. Why it matters: Due to the lack of clear AI copyright rules, websites are taking their own action to prevent scraping of their content. The increasing trend of GPTBot blocking, which is rising by about 5% each week, highlights the complex interplay between AI’s advancement and ongoing concerns about content ownership. As more sites block crawlers, it could become harder to improve AI models due to the lack of high-quality data. The six biggest websites currently blocking GPTBot are:
- Amazon.com – Aug 19, 2023
- Quora.com – Aug 22, 2023
- NYTimes.com – Aug 17, 2023
- Shutterstock.com – Aug 21, 2023
- Wikihow.com – Aug 12, 2023
- CNN.com – Aug 22, 2023
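For context on the mechanics mentioned above, the robots.txt route is the simpler of the two blocking methods: a site adds a disallow rule for the GPTBot user agent, and well-behaved crawlers honor it. A minimal sketch of how such a block can be checked programmatically is shown below; it uses only the Python standard library, and the example domain is purely illustrative.

```python
# Minimal sketch: check whether a site's robots.txt disallows OpenAI's GPTBot.
# The directive publishers are adding typically looks like:
#   User-agent: GPTBot
#   Disallow: /
# The example domain below is illustrative; swap in the sites you want to audit.
from urllib.robotparser import RobotFileParser

def blocks_gptbot(domain: str) -> bool:
    """Return True if the site's robots.txt disallows GPTBot from fetching its root page."""
    parser = RobotFileParser()
    parser.set_url(f"https://{domain}/robots.txt")
    parser.read()  # fetches and parses the live robots.txt
    return not parser.can_fetch("GPTBot", f"https://{domain}/")

if __name__ == "__main__":
    for site in ["example.com"]:
        print(f"{site} blocks GPTBot: {blocks_gptbot(site)}")
```

As the grand-bargain piece above notes, robots.txt is advisory rather than legally enforceable, which is why some sites pair it with IP-level blocking.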
The New York Times and OpenAI could end up in court. — Lawyers for the newspaper are exploring whether to sue OpenAI to protect the intellectual property rights associated with its reporting, according to two people with direct knowledge of the discussions. For weeks, the Times and the maker of ChatGPT have been locked in tense negotiations over reaching a licensing deal in which OpenAI would pay the Times for incorporating its stories in the tech company’s AI tools, but the discussions have become so contentious that the paper is now considering legal action. The individuals who confirmed the potential lawsuit requested anonymity because they were not authorized to speak publicly about the matter. A lawsuit from the Times against OpenAI would set up what could be the most high-profile legal tussle yet over copyright protection in the age of generative AI. A top concern for the Times is that ChatGPT is, in a sense, becoming a direct competitor with the paper by creating text that answers questions based on the original reporting and writing of the paper’s staff.
The Associated Press has updated its standards — and will publish 10 new AP Stylebook entries — to caution journalists about common pitfalls in coverage of artificial intelligence. When the AP became the first major news organization to strike a deal with OpenAI, the ChatGPT maker committed to paying to train its models on AP news stories going back to 1985. The joint announcement also said the deal would allow the AP to “examine potential use cases for generative AI in news products and services.” But while the Associated Press has used AI technology to automate some “rote tasks” — think corporate earnings reports, sporting event recaps, transcribing press conferences, etc. — since 2014, the standards unveiled on Wednesday sound a skeptical note about using generative AI for journalism’s most essential work.
The New York Times, which earlier this month changed its terms of service to forbid the scraping of its content for use in AI training, has gone a step further: It has blocked OpenAI’s web crawler, according to The Verge. In addition, the Times is mulling a lawsuit against OpenAI, following weeks of negotiations over a potential licensing deal, NPR reports, citing unnamed sources. The Times has blocked GPTBot, the crawler recently introduced by OpenAI, since August 17, The Verge writes. Meanwhile, the negotiations with OpenAI became so contentious that lawyers for the Times are considering a lawsuit. The sides are tussling over payment by OpenAI for the right to incorporate Times content into its AI tools. In the update of its terms, the Times said it prohibits use of “robots, spiders, scripts, service, software or any manual or automatic device, tool, or process designed to data mine or scrape” its content.
Zoom spotlighted its AI advancements during its Q2 FY ’23 earnings call on Monday. CEO and founder Eric Yuan said Zoom’s “aggressive roadmap” with AI aims to empower its “customers to work smarter and serve their customers better.” Yuan discussed ZoomIQ, the platform’s AI smart companion. The assistant uses generative AI to automatically provide summaries of meetings, including sessions that a user missed, and can also post the meeting recaps on Zoom Team Chat. Its AI Chat compose can draft chat messages and generate responses to colleagues based on context and tone. Meanwhile, Zoom Scheduler uses AI and multiple cameras to optimize images and angles of participants during meetings from different locations. “All of those generative AI features can make the platform not only more sticky but also more valuable,” Yuan said.
A group of 10 organizations, including The Associated Press and Gannett, is calling for a legal framework to protect journalism from unregulated AI use. In an open letter titled “Preserving public trust in media through unified AI regulation and practices,” the signees advocate regulatory and industry action to ensure: transparency about the makeup of training sets used to create AI models; consent of intellectual property rights holders on copying of their content in training data; the ability of media companies to collectively negotiate with AI model developers about use of their intellectual property; a requirement that generative AI models identify AI-generated content; and a mandate that generative AI model providers eliminate bias and misinformation. The group argues that AI, if left unchecked, “can threaten the sustainability of the media ecosystem as a whole by significantly eroding the public’s trust in the independence and quality of content and threatening the financial viability of its creators.” The letter also contends that AI models can disseminate content “often without any consideration of, remuneration to, or attribution to the original creators.” The open letter was signed by Agence France-Presse, the European Pressphoto Agency, the European Publishers’ Council, Gannett | USA TODAY Network, Getty Images, the National Press Photographers Association, the National Writers Union, the News Media Alliance, The Associated Press, and The Authors Guild.
The rise of generative AI has significant implications for search engines and publishing. As consumers increasingly rely on platforms like Google and Bing for information, publishers fear being overshadowed. Key concerns involve the usage of their content to train AI models and the potential reduction in the need to visit publisher websites due to comprehensive search engine results. Google and Microsoft, aiming to provide the best search experience, need to address these issues to ensure fairness and collaboration with publishers. Publishers worry about their content being used without compensation and the potential obsolescence of their platforms. IAC Chairman Barry Diller emphasizes the need for publishers to be compensated as AI tools assimilate their work. Google and Microsoft must work closely with the industry to design a search experience that benefits consumers and drives traffic to publishers. Compensation models for ingested content and respect for publishers’ property restrictions are essential steps. Google and Microsoft’s consumer-centric approach aligns with ensuring a fair environment for publishers. Compensation for publishers’ efforts would benefit all parties involved, resulting in thriving search engines, publishers, and satisfied consumers. Collaboration is necessary to address the challenges posed by changing technology and industry standards. The focus should be on building a better future through collective efforts and adapting to the evolving landscape.
Separately, Google describes its “SGE while browsing” feature as specifically designed to help people more deeply engage with long-form content from publishers and creators, and make it easier to find what you’re looking for while browsing the web. On some web pages you visit, you can tap to see an AI-generated list of the key points an article covers, with links that will take you straight to what you’re looking for directly on the page. Google will also help you dig deeper with “Explore on page,” where you can see questions the article answers and jump to the relevant section to learn more. “SGE while browsing” is designed to show AI-generated key points only on articles that are freely available to the public on the web.
Paying for training data? — There are already a bunch of lawsuits from people who think their work may be in LLM training data, and now IAC and a group of publishers are apparently thinking about demanding some very large ($bn) payments. Unlike the ‘link tax’ demands, this actually has some rational basis – if you can ask ChatGPT ‘what was the news today?’ or ‘explain what that story’s about’ and it can just tell you, it really is ‘using the news’ and not sending them traffic (and raises a lot of social and political questions too). On the other hand, if you think of LLMs as, say, ‘reasoning engines’ or some similar phrase and NOT databases, and don’t care if they know exactly who Liz Truss or Chris Christie were, then the proportion of their training data that’s actually made up by content from these companies might be tiny and OpenAI or Google could retrain without them. Never mind fair use (which might or might not apply here) – is there enough data available for training that’s entirely out of copyright for this to become moot? Conversely, if they have to pay Barry Diller, why aren’t they paying me too?
OpenAI has applied for a trademark for GPT-5, the next version of its large language model. The trademark application with the U.S. Patent and Trademark Office describes GPT-5 as “downloadable computer software” for tasks like natural language processing, text and speech generation, understanding, and analysis, including translation and transcription. In June, OpenAI CEO Sam Altman said the company hasn’t started developing GPT-5. Altman said OpenAI would not develop GPT-5 “for some time” after it released GPT-4 in March. While the trademark filing doesn’t provide a specific debut date for GPT-5, there are rumors suggesting that its training might be completed by 2024.
Meta is gearing up to launch AI-powered chatbots, called “personas,” on Facebook and Instagram as soon as next month. According to the Financial Times, the chatbots will reportedly have distinct personalities, like offering travel recommendations in a surfer’s style or speaking like Abraham Lincoln. The chatbots will offer a search function, personalized recommendations, and humanlike interactive experiences. In June, app researcher Alessandro Paluzzi uncovered an upcoming “Chat with an AI” feature on Instagram, offering advice and answering questions from 30 different AI personalities while helping users compose messages. In an earnings call this week, CEO Mark Zuckerberg said the company is building AI experiences on top of its LLaMA large language model. He said AI can help people connect and express themselves in Meta’s apps. They are “creative tools that make it easier and more fun to share content, agents that act as assistants, coaches or that can help you interact with businesses and creators and more.”
Artifact just launched the ability to listen to any article on Artifact with text-to-speech AI, which is powered by Speechify. You’ll be able to access premium voices for free — including voices from Snoop Dogg and Gwyneth Paltrow. Speechify uses AI to create natural sounding reading voices and has helped millions of people as the most popular text-to-speech app available.
Apple’s progress in generative AI lags behind competitors and there is no sign that the company will launch AI services or products in 2024, according to analyst Ming-Chi Kuo. The update contradicts Bloomberg reporting that Apple could make a significant AI announcement in 2024. Following the release of Apple’s quarterly financial results today, Kuo doesn’t believe AI will be brought up much during the earnings call since Apple trails competitors in the field. Kuo’s latest note says there is no indication that next-gen AI will launch in Apple’s hardware for consumers next year. Last month, reports emerged about Apple working on an “Apple GPT” chatbot and other AI projects, but the company’s strategy for consumer products is unclear. Unlike Microsoft, Google, Meta, and Amazon, Apple has been more reserved in adopting GenAI technology. While Apple has used large language models to enhance iOS, it prefers to call the technology “transformers” rather than using terms like “AI” or “GPT.” Tim Cook, Apple’s CEO, has acknowledged the importance of being deliberate and thoughtful in AI development.
Apple’s CEO, Tim Cook, confirmed that the company’s significant R&D spending is partly driven by its investments in AI. Apple’s latest earnings report shows the company has spent $22.6B on research and development so far in fiscal 2023, a $3.12B increase from where it was last year. Despite facing declining sales, Apple has been more reserved in its AI announcements compared to competitors like Meta, Microsoft, and Google, which are actively engaged in an AI arms race. In its latest earnings call, Cook emphasized that AI is critical to Apple’s future and it plans to continue investing in AI technologies, adding that the company views AI and machine learning as “integral to virtually every product.” Reports suggest that Apple is developing new AI tools, including a large language model nicknamed “Apple GPT,” while Cook noted that AI and machine learning have been integral to Apple’s products for years.
During Amazon’s latest earnings call, CEO Andy Jassy revealed that every team within the company is actively working on generative AI projects, spanning entertainment, AWS, advertising, and devices. Jassy expressed the importance of GenAI, saying it will be at the heart of operations and represent a significant investment and focus for the company. While Jassy didn’t go into specific details about Amazon’s AI projects during the call, he mentioned two areas of focus: streamlining operations for cost-effectiveness and enhancing customer experiences. Jassy acknowledged that many are familiar with generative AI in applications like OpenAI’s ChatGPT. However, he stressed its importance in backend areas, such as using it at the compute layer to train foundational models. While Amazon will create its own GenAI applications, the majority will be developed by other companies, with a positive outlook on many of them being built on AWS, he said. Jassy also highlighted Amazon CodeWhisperer, an AI-powered coding companion with promising early results in improving developer productivity.
Axel Springer to Kick Off Test of AI at Insider — From the memo to staff: “We welcome everyone in the newsroom to experiment with AI, whether you’re in the pilot group or not. Be aware that AI has pitfalls. If you’d like to get access to our paid account for GPT-4, which is the more advanced version of the free ChatGPT, please reach out to Robin Ngai for access.”
G/O’s AI Use Called an ‘Affront to Journalism’ — G/O Media’s owners and management “view artificial intelligence as a way to drastically reduce labor costs and maximize profit.”
Meta plans to monetize Llama 2, its updated large language model (LLM), by charging major cloud-computing companies like Amazon and Google for reselling the service. While Llama 2 is open source, Meta stipulated that the largest cloud companies cannot use Llama 2 under a free license. Instead, those companies must establish a business arrangement with Meta, according to CEO Mark Zuckerberg. Zuckerberg said during a quarterly earnings call that major cloud companies like Microsoft, Amazon, and Google, which plan to resell Llama 2’s services, should share a portion of their revenue with Meta. He acknowledged that the immediate revenue might not be substantial, but he hopes it will grow over the long term.
Disney, Sony to Staff Up on AI During Strikes — While the future of AI in Hollywood is unclear, there is no question that the major studios and streaming services are intrigued by the technology. Job listings at almost every major entertainment company show that there is a veritable AI hiring spree going on as companies seek to understand how the technology can change their businesses.
As Publishers Seek AI Payments, AP Gets a First-Mover Safeguard — When the Associated Press was negotiating an agreement to license its content to generative-AI company OpenAI, the newswire giant had a hesitation: What if another publisher comes along and strikes a more lucrative deal? The AP built in a first-mover safeguard, often referred to as a “most favored nation” clause, that gives it the right to reset the terms if another company gets a better deal from OpenAI, according to people familiar with the agreement. News organizations are still in the early stages of evaluating generative AI tools from companies including OpenAI, Microsoft and Google, which are trained on vast amounts of internet data, including news articles. Several publishers are seeking payments for the use of their content. With no precedent in the industry, determining the fair value of what they produce isn’t straightforward. The AP was the first major publisher to strike a pact with a major AI platform, and its most favored nation clause reflects the uncertainty in the industry about how much news content is worth to AI bots.
Google is testing an AI tool that can generate news content, according to The New York Times. The tool, known internally as “Genesis,” was showcased to executives from major media outlets including the Times, The Washington Post, and News Corp., owner of The Wall Street Journal. According to the Times, Google has pitched the tool as “responsible” technology that could serve as a personal assistant for journalists, automating certain tasks to free up their time. The company said the AI tool could help with headline generation and strengthen writing styles. However, some executives expressed concerns about the tool overlooking the effort required for accurate and artful news stories. While some newsrooms have already started using AI-generated content, recent backlash against publications for publishing error-prone AI-generated stories highlights some of the current limits of AI in journalism.
Seven leading A.I. companies in the United States have agreed to voluntary safeguards on the technology’s development, the White House announced, pledging to manage the risks of the new tools even as they compete over the potential of artificial intelligence. The seven companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — formally made their commitment to new standards for safety, security and trust at a meeting with President Biden at the White House on Friday afternoon. “We must be cleareyed and vigilant about the threats emerging technologies can pose — don’t have to but can pose — to our democracy and our values,” Mr. Biden said in brief remarks from the Roosevelt Room at the White House. “This is a serious responsibility; we have to get it right,” he said, flanked by the executives from the companies. “And there’s enormous, enormous potential upside as well.”
OpenAI and The Associated Press (AP) reached a licensing deal for the ChatGPT maker to use AP news stories to train its AI models. The financial details of the deal — one of the first of its kind for a media outlet — were not made public. Under the two-year agreement, OpenAI will license a portion of AP’s text archive dating back to 1985 to train its AI algorithms. In addition to licensing fees, AP will gain access to OpenAI’s technology and expertise. The AP says it doesn’t currently use ChatGPT or other generative AI systems to write stories. The news comes after tech giants OpenAI, Google, and others launched talks with media outlets to forge agreements to use their news content for training AI. Financial Times, News Corp, Axel Springer, The New York Times, and The Guardian were among the publishers involved in the talks. The news industry is actively exploring AI’s potential while safeguarding against unauthorized use of its work.
Meta plans to release a commercial version of its AI model that can be customized by companies, according to The Financial Times. The move aims to compete with OpenAI and Google in the commercial market for generative AI models. The open-source language model, LLaMa, was previously available only to researchers and academics. A new commercial version of the model will be made more widely available soon, sources told FT. The company intends to offer the AI model for free initially but could explore monetization options in the future. Meta may also charge enterprise customers to allow them to fine-tune the model using their proprietary data, though there are no immediate plans to do so. Meta’s LLMs are open source, so details about the models are made public.
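Meta hasn’t said how enterprise fine-tuning would be packaged or priced, so the snippet below is purely an illustration of the idea: adapting an open Llama 2 checkpoint to in-house text with Hugging Face transformers. The model ID, the company_docs.txt file, and the training settings are all assumptions, not anything Meta has announced.

```python
# Minimal fine-tuning sketch (not Meta's commercial offering). Assumes access
# to the gated "meta-llama/Llama-2-7b-hf" weights on Hugging Face, a local
# company_docs.txt of proprietary text, and enough GPU memory for a 7B model.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama 2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_id)

# The proprietary data an enterprise customer would supply, one example per line.
dataset = load_dataset("text", data_files={"train": "company_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama2-company-ft",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=train_set,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The commercial question in the story sits on top of exactly this workflow: the model weights are open, but Meta could charge for the hosted infrastructure, support, and data handling around it.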
Shutterstock is extending its partnership with OpenAI for six more years, allowing the AI company to train its models on Shutterstock’s sprawling library of images, videos, music, and metadata during that time. The stock image site’s partnership with OpenAI first began in 2021, when Shutterstock started letting the company use its images to train its text-to-image model, DALL-E.
Microsoft announced a new artificial intelligence subscription service for Microsoft 365: the company will charge users an additional $30 per month for the use of generative AI with tools like Teams, Excel and Word. For enterprise users, the add-on reportedly amounts to a 50% or greater increase over their current monthly per-seat price. The updates come as the race to offer consumer-driven generative AI tools heats up among tech giants like Microsoft, Google, IBM and more.
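The 50% figure is easier to see with plan prices in hand. The $30/user/month add-on is from the announcement; the base-plan prices below are assumed 2023 list prices used only for illustration.

```python
# Back-of-the-envelope check on the "50% or more" figure.
copilot_addon = 30.00  # reported Copilot add-on, $/user/month
base_plans = {
    "Microsoft 365 E3": 36.00,  # assumed list price, $/user/month
    "Microsoft 365 E5": 57.00,  # assumed list price, $/user/month
}

for plan, price in base_plans.items():
    uplift = copilot_addon / price * 100
    print(f"{plan}: ${price:.2f} + ${copilot_addon:.2f} -> {uplift:.0f}% increase")

# Under these assumed prices, E3 rises by roughly 83% and E5 by roughly 53%,
# both at or above the reported 50%.
```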
No one knows what a head of AI does, but it’s the hottest new job — If AI is coming for our jobs, many Americans are hoping to get out in front of it. Regular people are using AI at work, and tech workers are rebranding themselves as AI experts. And those in leadership are vying for the hottest new job title: head of AI. Outside of tech, the head of AI position was mostly nonexistent a few years ago, but now people are taking on that title — or at least its duties — at companies ranging from Amazon to Visa to Coca-Cola. In the US, the number of people in AI leadership roles has grown threefold in the past five years, according to data from LinkedIn, bucking the downward trend in tech hiring overall. And while the head of AI job description varies widely by company, the hope is that those who end up with this new responsibility will do everything from incorporating AI into businesses’ products to getting employees up to speed on how to use AI in their jobs. Companies want the new role to keep them at the forefront of their industries amid AI disruption, or at least keep them from being left behind. “This is the biggest deal of the decade, and it’s ridiculously overhyped,” said Peter Krensky, a director and analyst at Gartner who specializes in AI talent management.
G/O Readies Even More Articles Written by AI — G/O Media plans to create more AI-produced stories soon, according to an internal memo: “It is absolutely a thing we want to do more of.” You’re going to see more AI-written articles whether you like it or not.
AJP Partners with OpenAI to Help Local News — OpenAI is committing $5 million to the American Journalism Project to look for ways to support local news through artificial intelligence. It’s part of a larger effort by OpenAI to work with journalism companies as it trains its algorithms and builds its tools. OpenAI is currently in discussions with other major news companies about licensing news content and tech-sharing deals, sources told Axios. AJP will distribute the funding to 10 of its 41 portfolio organizations to experiment with best practices for how local news outlets can use AI responsibly.
Raptive, whose 4,600 creators it calls the largest collective group of content creators in the world, argues that tech and AI companies are making foundational decisions without sufficiently considering the impact on them and the millions of content creators whose work powers the internet and AI technology. Its position is that AI companies and creators can work together for the benefit of everyone, but only if creators have a voice in the conversation. The company is gathering signatures for an open letter at ProtectContentCreators.com demanding that big tech and AI companies support creators and their content.
Google Tests AI Tool for Writing News Articles — Google is testing a product that uses artificial intelligence to produce news stories, pitching it to major outlets such as The New York Times. The product, pitched as a helpmate for journalists, has been demonstrated for executives at The New York Times, The Washington Post and News Corp, which owns The Wall Street Journal. The tool, known internally by the working title Genesis, can take in information — details of current events, for example — and generate news content. Google believes it could serve as a kind of personal assistant for journalists, automating some tasks to free up time for others, and sees it as responsible technology that could help steer the publishing industry away from the pitfalls of generative A.I.
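Genesis itself isn’t public, so the sketch below is only a generic illustration of the workflow described above: hand an LLM a set of event facts and ask for a headline and draft lede. It assumes the openai Python package (pre-1.0 interface), an OPENAI_API_KEY in the environment, and made-up event details.

```python
# Generic "facts in, draft out" sketch; not Google's Genesis, which is not
# publicly available. The event details below are hypothetical.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

event_details = {
    "who": "Springfield city council",
    "what": "approved a $40M transit budget",
    "when": "Tuesday night",
    "source": "public meeting minutes",
}

prompt = (
    "Using only these facts, write a one-line news headline and a "
    f"two-sentence draft lede. Do not add new claims. Facts: {event_details}"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response["choices"][0]["message"]["content"])
```

The editorial caveat in the story applies directly here: whatever comes back still needs a human check for accuracy before it goes anywhere near publication.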
Media mogul Barry Diller reiterated that he and other top publishers are prepared to take legal action over the use of copyrighted works to train AI systems. Diller, the chair of Dotdash Meredith parent IAC, said on TV this week that tech firms like Google and Microsoft claim that “the fair use doctrine of copyright law allows them to suck up all this stuff,” according to The Hill. He added, “Of course, [they] say we’re open to commercial agreements. But on the side of those people who are depending upon advertising, Google, for instance, they say, ‘Yes, we’ll give you a revenue share.’ Right now, the revenue share is zero. So, what percent of zero would you like today?”
Insights provided by Matthew Scott Goldstein, from his quarterly newsletter “What I Saw Happen.”