Free webinars from Cloudflare and TollBit: AI monitoring and monetisation

The rise of generative AI has created serious copyright concerns, but also opportunities for savvy publishers and creators. Two upcoming webinars, open to European and national publishers’ associations and their members, can help you get up to speed.

TollBit has developed an AI monitoring and monetisation platform already trusted by over 2,400 leading publisher websites. In their upcoming webinar, they’ll offer a sneak peek at their forthcoming ‘State of the Bots Report’ and discuss their latest tools for monitoring, managing and monetising AI crawler activity. Join here on 10 September (passcode 306368).

Cloudflare is developing a new suite of AI auditing tools designed to empower publishers to choose what, if any, AI access they want to allow. In their webinar, they’ll share how publishers can now monitor attention from AI crawlers, block unwanted data scraping and even charge AI crawlers for access to new content. Join here on 24 September (passcode 972257).

Administration continues to focus on, restrict international collaboration

Following July’s USDA memo announcing that it will ‘place America First in provisioning all USDA funds’, the NIH released a statement on ‘Maximising and Safeguarding NIH’s Investment in Foreign Collaborations’. While the text encouragingly notes that ‘the agency conducts and supports international research’ to advance its mission, its subtext is that any support going to foreign entities or activities will receive extra scrutiny.

Political interference in grants likely to stay, increase

The Trump administration released an executive order last month giving political appointees the responsibility for reviewing and approving all grant decisions, a radical change from prior merit review processes. This builds on the politically motivated cancellation of grants at NIH, NSF and other federal agencies earlier this year and the attacks by the administration on American universities.

This movement away from pure peer review of grants is likely to hold, given recent federal court decisions allowing the administration’s cuts from earlier this year to go forward (e.g., here and here). While final decisions in most of these cases have not yet been made, it does not appear that the courts will take a significant stand against political decision-making for grants.

Agencies start to release ‘Gold Standard Science’ approaches

Federal agencies, including DOE, NIH and NSF, have released plans to implement President Trump’s executive order promoting ‘Gold Standard Science’, pursuant to the White House Office of Science and Technology Policy (OSTP) guidance for agencies and request for plans by 22 August. Other agencies may issue responses by the time this newsletter goes to press, or subsequently.

Additionally, the EO directed agencies to revert to their pre-Biden scientific integrity policies, and the EPA officially reinstated its 2012 scientific integrity policy in accordance with the order.

NIH publishing expense cap comments open through 15 September

As noted in last month’s newsletter, NIH has formally proposed a cap on the use of NIH grant funds for publication costs in a blog post and an associated request for information. STM CEO Caroline Sutton met with the NIH Office of Science Policy in August and was told that they are very open to the comments, and want to consider potential unintended consequences of the proposed options as well as alternatives. STM continues to strongly encourage all members to submit comments through the 15 September deadline.

Joint Statement on EU AI Act Implementation Measures

STM joins a global coalition of authors and rightsholders to express disappointment in the lack of ambition shown in the European Commission’s AI Act implementation package, including the Code of Practice, the Guidelines, and the Transparency Template. These instruments risk steering implementation of the AI Act in a direction that deprives it of meaningful effect.

We call for resolute and effective enforcement of the Act — to ensure legal clarity and to strengthen science, culture, and innovation across the EU in line with the principles of responsible and ethical AI.

📄 Read the full joint statement
🔗 For context, see our joint campaign: Stay True to the (AI) Act

STM and ISTIC launch joint Open Research Fund

On September 26, 2024, the International Association of Scientific, Technical and Medical Publishers (STM) and the Institute of Scientific and Technical Information of China (ISTIC) signed a Memorandum of Understanding (MOU) in Beijing, marking a significant step toward deepening collaboration between China and the international research community.

Building on this foundation, STM and ISTIC have launched the ISTIC–STM Open Research Fund 2025, unveiled through newly released application guidelines in July 2025. The fund invites proposals from China’s academic publishing community and researchers, supporting studies that will help shape a more open, credible, and sustainable global research ecosystem.

The 2025 call for proposals focuses on three urgent and strategic themes:

  1. Research on Copyright Issues under Artificial Intelligence
  2. Research on How Academic Publishers Can Maintain Credible Content: Challenges and Strategies
  3. Research on How Scientific Journals Can Fulfill Their Social Responsibilities

This initiative reflects STM’s and ISTIC’s shared commitment to fostering innovation, upholding research integrity, and contributing diverse perspectives to the global dialogue on scholarly communication.

[Image above: Members of the ISTIC–STM Working Group gathered in Beijing with STM CEO, Caroline Sutton, June 20, 2025]

A United Call to Protect the Future of Research

STM has joined the Association of College and Research Libraries, the Association of Research Libraries, the Association of University Presses, and the Society for Scholarly Publishing in a unified call to safeguard the future of research.

Published in The Scholarly Kitchen, our joint op-ed responds to escalating uncertainty in the United States, including sweeping federal funding cuts and reductions in the federal research workforce, with a shared statement of values and intent.

While this statement centers on the U.S. context, the values at its core—trust, continuity, and scholarly independence—are essential across the global research ecosystem.

Though our sectors bring different perspectives, we are aligned in our belief that the infrastructure supporting discovery, scholarship, and access to knowledge must be protected.

This piece marks a first step—and signals a broader commitment to collaboration and action.

Read the op-ed: Trust and Integrity: A Research Imperative

NIH outlines five options for capping publication costs

Following last month’s announcement that the US National Institutes of Health would implement a cap on allowable publication costs, and associated comments that NIH Director Dr. Jay Bhattacharya made in interviews (here and here), NIH has announced five potential options for limiting the amount of taxpayer money that goes to support publishing:

  1. Disallow all publication costs.
  2. Set a limit on allowable costs per publication (e.g., $2000).
  3. Option 2, but allow higher amounts when peer reviewers are compensated (e.g., $3000).
  4. Set a limit on the total amount of an award that can be spent on publication costs (e.g., 0.8% of a grant).
  5. Both options 2 and 4.

Their approach is outlined in a blog post and in the request for information, which is open for comment through 15 September. The RFI outlines five specific questions, including a catch-all ‘other’ option, but also allows responses with attachments.

STM will be submitting a response and strongly encourages all our members to do so as well.

EU Commission seeks experts for scientific panel to advise on AI

The EU Commission is seeking 60 independent experts to support the implementation and enforcement of the AI Act. The chosen experts will serve on a scientific panel and advise the EU AI office on general-purpose AI, including evaluation methodologies, cross-border market surveillance and emerging risks.

All candidates must hold a PhD or equivalent, be free of any financial ties to AI providers and have expertise in AI or another relevant field, such as cybersecurity. Applications are open through 14 September.

To learn more or put your name forward for consideration, click here.