
Friday, March 31, 2023

Scottish & UK Approaches to Humans and AI

March 2023 saw this UK government publication:

Press release

UK unveils world leading approach to innovation in first artificial intelligence white paper to turbocharge growth

Government launches AI white paper to guide the use of artificial intelligence in the UK, to drive responsible innovation and maintain public trust in this revolutionary technology.

Graphic with text: New world leading approach to AI in the UK
  • White paper sets out new approach to regulating artificial intelligence to build public trust in cutting-edge technologies and make it easier for businesses to innovate, grow and create jobs
  • plan will help unleash the benefits of AI, one of the 5 technologies of tomorrow, which already contributes £3.7 billion to the UK economy
  • follows new expert taskforce to build the UK’s capabilities in foundation models, including large language models like ChatGPT, and £2 million for sandbox trial to help businesses test AI rules before getting to market

Five principles, including safety, transparency and fairness, will guide the use of artificial intelligence in the UK, as part of a new national blueprint for our world class regulators to drive responsible innovation and maintain public trust in this revolutionary technology.

The UK’s AI industry is thriving, employing over 50,000 people and contributing £3.7 billion to the economy last year. Britain is home to twice as many companies providing AI products and services as any other European country and hundreds more are created each year.

AI is already delivering real social and economic benefits for people, from helping doctors to identify diseases faster to helping British farmers use their land more efficiently and sustainably. Adopting artificial intelligence in more sectors could improve productivity and unlock growth, which is why the government is committed to unleashing AI’s potential across the economy.

As AI continues developing rapidly, questions have been raised about the future risks it could pose to people’s privacy, their human rights or their safety. There are concerns about the fairness of using AI tools to make decisions which impact people’s lives, such as assessing the worthiness of loan or mortgage applications.

Alongside hundreds of millions of pounds of government investment announced at Budget, the proposals in the AI regulation white paper will help create the right environment for artificial intelligence to flourish safely in the UK.

Currently, organisations can be held back from using AI to its full potential because a patchwork of legal regimes causes confusion and financial and administrative burdens for businesses trying to comply with rules.

The government will avoid heavy-handed legislation which could stifle innovation and take an adaptable approach to regulating AI. Instead of giving responsibility for AI governance to a new single regulator, the government will empower existing regulators - such as the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority - to come up with tailored, context-specific approaches that suit the way AI is actually being used in their sectors.

The white paper outlines 5 clear principles that these regulators should consider to best facilitate the safe and innovative use of AI in the industries they monitor. The principles are:

  • safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed
  • transparency and explainability: organisations developing and deploying AI should be able to communicate when and how it is used and explain a system’s decision-making process in an appropriate level of detail that matches the risks posed by the use of AI
  • fairness: AI should be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes
  • accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes
  • contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI

This approach will mean the UK’s rules can adapt as this fast-moving technology develops, ensuring protections for the public without holding businesses back from using AI technology to deliver stronger economic growth, better jobs, and bold new discoveries that radically improve people’s lives.

Over the next 12 months, regulators will issue practical guidance to organisations, as well as other tools and resources like risk assessment templates, to set out how to implement these principles in their sectors. When parliamentary time allows, legislation could be introduced to ensure regulators consider the principles consistently.

Science, Innovation and Technology Secretary Michelle Donelan said:

AI has the potential to make Britain a smarter, healthier and happier place to live and work. Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely.

Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow.

Businesses warmly welcomed initial proposals for this proportionate approach during a consultation last year and highlighted the need for more coordination between regulators to ensure the new framework is implemented effectively across the economy. As part of the white paper published today, the government is consulting on new processes to improve coordination between regulators as well as monitor and evaluate the AI framework, making changes to improve the efficacy of the approach if needed.

£2 million will fund a new sandbox, a trial environment where businesses can test how regulation could be applied to AI products and services, to support innovators bringing new ideas to market without being blocked by rulebook barriers.

Organisations and individuals working with AI can share their views on the white paper as part of a new consultation launching today which will inform how the framework is developed in the months ahead.

Lila Ibrahim, Chief Operating Officer and UK AI Council Member, DeepMind, said:

AI has the potential to advance science and benefit humanity in numerous ways, from combating climate change to better understanding and treating diseases. This transformative technology can only reach its full potential if it is trusted, which requires public and private partnership in the spirit of pioneering responsibly. The UK’s proposed context-driven approach will help regulation keep pace with the development of AI, support innovation and mitigate future risks.

Grazia Vittadini, Chief Technology Officer, Rolls-Royce, said:

Both our business and our customers will benefit from agile, context-driven AI regulation. It will enable us to continue to lead the technical and quality assurance innovations for safety-critical industrial AI applications, while remaining compliant with the standards of integrity, responsibility and trust that society demands from AI developers.

Sue Daley, Director for Tech and Innovation at techUK, said:

techUK welcomes the much-anticipated publication of the UK’s AI white paper and supports its plans for a context-specific, principle-based approach to governing AI that promotes innovation. The government must now prioritise building the necessary regulatory capacity, expertise, and coordination. techUK stands ready to work alongside government and regulators to ensure that the benefits of this powerful technology are felt across both society and the economy.

Clare Barclay, CEO, Microsoft UK, said:

AI is the technology that will define the coming decades with the potential to supercharge economies, create new industries and amplify human ingenuity. If the UK is to succeed and lead in the age of intelligence, then it is critical to create an environment that fosters innovation, whilst ensuring an ethical and responsible approach. We welcome the UK’s commitment to being at the forefront of progress.

Rashik Parmar MBE, chief executive, BCS The Chartered Institute for IT, said:

AI is transforming how we learn, work, manage our health, discover our next binge-watch and even find love. The government’s commitment to helping UK companies become global leaders in AI, while developing within responsible principles, strikes the right regulatory balance. As we watch AI growing up, we welcome the fact that our regulation will be cross-sectoral and more flexible than that proposed in the EU, while seeking to lead on aligning approaches between international partners. It is right that the risk of use is regulated, not the AI technology itself. It’s also positive that the paper aims to create a central function to help monitor developments and identify risks.  Similarly, the proposed multi-regulator sandbox [a safe testing environment] will help break down barriers and remove obstacles. We need to remember this future will be delivered by AI professionals - people - who believe in shared ethical values. Managing the risk of AI and building public trust is most effective when the people creating it work in an accountable and professional culture, rooted in world-leading standards and qualifications.

Notes to editors

Read the AI regulation white paper.

Organisations and individuals involved in the AI sector are encouraged to respond to the consultation on the white paper.

ED: we'll be adding an Adam Smith scholars' update of Moral Sentiments at 265 later this year, which would also have been my father's centenary and the 73rd year of the survey he and von Neumann started (hosted in The Economist until 1989, and on other 2025 report platforms since): what goods will peoples unite around where they have first access to 100+ times more tech per decade?


Lila Ibrahim
Google DeepMind COO; formerly Coursera, KPCB and Intel; founder of Team4Tech; Purdue engineer; women-in-tech advocate; Crown Fellow; WEF YGL.

Lila Ibrahim’s posts:

Today we've set out a world leading approach to regulating AI. It will: ✅ help businesses innovate, grow & create jobs ✅ keep people safe & build public trust. I sat down with @DeepMind to talk about the exciting future of AI in the UK 👇
The phenomenal teams from Google Research’s Brain and @DeepMind have made many of the seminal research advances that underpin modern AI, from Deep RL to Transformers. Now we’re joining forces as a single unit, Google DeepMind, which I’m thrilled to lead!
This is exciting news: @DeepMind & Brain team from Research join forces as Google DeepMind. #AI will help communities to achieve amazing breakthroughs - bringing these teams together gives us the best talent&resources to help address the biggest challenges facing humanity
Google DeepMind
We’re proud to announce that DeepMind and the Brain team from @Google Research will become a new unit: Google DeepMind. Together, we'll accelerate progress towards a world where AI can help solve the biggest challenges facing humanity.
Proud of DeepMind’s cover in @Nature with #AlphaTensor, which discovers more efficient ways to tackle a mathematical task that’s ubiquitous in modern computing. We hope this work helps address a fundamental computer science problem & advances a new era of algorithmic discovery.
Google DeepMind
Today in @Nature: #AlphaTensor, an AI system for discovering novel, efficient, and exact algorithms for matrix multiplication - a building block of modern computations. AlphaTensor finds faster algorithms for many matrix sizes.
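For readers wondering what "finding a faster matrix multiplication algorithm" means in practice: AlphaTensor searches for ways to multiply matrices using fewer scalar multiplications than the naive method. The classic precedent, sketched below as a plain-Python illustration (this is Strassen's 1969 algorithm, not AlphaTensor's output), multiplies two 2×2 matrices with 7 multiplications instead of the naive 8:

```python
# Strassen's 1969 algorithm: multiply two 2x2 matrices with 7 scalar
# multiplications instead of the naive 8. AlphaTensor searches for
# algorithms of exactly this kind (fewer multiplications for a given
# matrix size); Strassen's is shown purely as the classic example.

def strassen_2x2(A, B):
    """Multiply 2x2 matrices A and B using Strassen's 7 products."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine the 7 products into the 4 entries of the result.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Applied recursively to matrices split into 2×2 blocks, this saving compounds, which is why shaving even one multiplication off a small base case matters at scale.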