1. Mapping the Technology Innovation Cycle I Worked With
1.1. Idea for a potentially useful technology innovation for the UN. The idea gradually evolves, and initial relationships are formed within or between innovators and users
1.1.1. Problem & solution identified, with potential application of a new tool, approach, or idea
1.1.2. Idea becomes a concrete concept/proposal
1.1.3. Create initial relationships between innovators and users
1.2. Determine feasibility of idea through a proof of concept (POC)
1.2.1. Define in detail the specific problem, the proposed innovation and its use case(s), working with stakeholders and end-users. Develop key metrics and criteria to measure the effectiveness of the innovation (modelled in the sketch after this section)
1.2.2. Develop POC to determine the feasibility of the innovation against the evaluation criteria
1.3. Prototype built & tested, based on the POC. Focus on validation of the strategic design
1.4. Pilot Project (one or more) to demonstrate operational use of the innovation under realistic conditions
1.5. Finalize product/innovation for actual use
1.5.1. Roll out to one or more selected clients
1.5.2. Use and update metrics for performance measurement
1.5.3. Provide first drafts of SOPs, CONOPs and training materials
1.6. Ongoing lessons learned: life-cycle assessment and costing; continuing assessment of practice, including end-user feedback, etc.
1.7. Innovation made available as a standard item/procedure (e.g., in a service catalogue/portfolio), with appropriate maintenance & training. The cycle begins anew with improvements made
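The stage-gating implied by 1.2.1 and 1.2.2 can be made concrete. Below is a minimal Python sketch, assuming the cycle is modelled as a simple state machine in which an innovation advances only when its evaluation criteria are met; the `Stage`, `Criterion` and `Innovation` names and the accuracy figures are illustrative assumptions, not part of any actual UN process.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Stage(Enum):
    """Stages of the innovation cycle outlined above (1.1-1.7)."""
    IDEA = auto()                # 1.1 concept and initial relationships
    PROOF_OF_CONCEPT = auto()    # 1.2 feasibility against evaluation criteria
    PROTOTYPE = auto()           # 1.3 validation of the strategic design
    PILOT = auto()               # 1.4 operational use under realistic conditions
    ROLLOUT = auto()             # 1.5 selected clients, SOP/CONOPs drafts
    SUSTAINMENT = auto()         # 1.6 lessons learned, life-cycle costing
    STANDARD_OFFERING = auto()   # 1.7 service catalogue item

@dataclass
class Criterion:
    """One evaluation criterion with a measured metric (1.2.1)."""
    name: str
    target: float
    measured: float | None = None

    def met(self) -> bool:
        return self.measured is not None and self.measured >= self.target

@dataclass
class Innovation:
    name: str
    stage: Stage = Stage.IDEA
    criteria: list[Criterion] = field(default_factory=list)

    def advance(self) -> Stage:
        """Move to the next stage only if every criterion is met."""
        if not all(c.met() for c in self.criteria):
            raise ValueError(f"{self.name}: evaluation criteria not yet met")
        stages = list(Stage)
        self.stage = stages[min(stages.index(self.stage) + 1, len(stages) - 1)]
        return self.stage

# Example: a POC gated on an accuracy criterion (numbers are illustrative).
poc = Innovation("field-report triage tool", Stage.PROOF_OF_CONCEPT,
                 [Criterion("classification accuracy", target=0.85, measured=0.91)])
print(poc.advance())  # Stage.PROTOTYPE
```

The clamp in `advance` keeps a mature innovation at the standard-offering stage, mirroring 1.7, where the cycle restarts only when new improvements are proposed.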
2. State‑led initiatives embedded in the existing architecture
3. Recommendation 3: AI standards exchange = Creation of an AI standards exchange, bringing together representatives from national and international standard-development organizations, technology companies, civil society and representatives from the international scientific panel
3.1. Developing and maintaining a register of definitions and applicable standards for measuring and evaluating AI systems.
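As a rough illustration of what such a register could look like in software, here is a minimal in-memory sketch, assuming entries are keyed by term; the `RegisterEntry` fields and the ISO identifier shown are illustrative assumptions, not a schema from the recommendation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RegisterEntry:
    """One registered definition with the standards that apply to it."""
    term: str                               # e.g. "robustness"
    definition: str
    standards: tuple[str, ...]              # identifiers of applicable standards
    evaluation_metrics: tuple[str, ...] = ()

class StandardsRegister:
    """Minimal in-memory register keyed by lower-cased term."""
    def __init__(self) -> None:
        self._entries: dict[str, RegisterEntry] = {}

    def add(self, entry: RegisterEntry) -> None:
        self._entries[entry.term.lower()] = entry  # latest entry wins

    def lookup(self, term: str) -> RegisterEntry | None:
        return self._entries.get(term.lower())

register = StandardsRegister()
register.add(RegisterEntry(
    term="robustness",
    definition="Ability of an AI system to maintain performance under perturbation.",
    standards=("ISO/IEC 24029-2",),             # illustrative identifier
    evaluation_metrics=("accuracy under noise",),
))
print(register.lookup("Robustness"))
```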
4. Recommendation 2: Policy dialogue on AI governance = Launch of a twice-yearly intergovernmental and multi-stakeholder policy dialogue on AI governance on the margins of existing meetings at the United Nations
5. Recommendation 1: An international scientific panel on AI
5.1. Creation of an independent international scientific panel on AI, made up of diverse multidisciplinary experts in the field serving in their personal capacity on a voluntary basis. Supported by the proposed United Nations AI office and other relevant United Nations agencies, partnering with other relevant international organizations
5.1.1. a) Issuing an annual report surveying AI-related capabilities, opportunities, risks and uncertainties, identifying areas of scientific consensus on technology trends and areas where additional research is needed;
5.1.2. b) Producing quarterly thematic research digests on areas in which AI could help to achieve the SDGs, focusing on areas of public interest which may be under-served;
5.1.3. c) Issuing ad hoc reports on emerging issues, in particular the emergence of new risks or significant gaps in the governance landscape.
6. Categorizing AI-related risks based on existing or potential vulnerability
6.1. Individuals:
6.1.1. Human dignity, value or agency (e.g. manipulation, deception, nudging, sentencing, exploitation, discrimination, equal treatment, prosecution, surveillance, loss of human autonomy and AI-assisted targeting).
6.1.2. Physical and mental integrity, health, safety and security (e.g. nudging, loneliness and isolation, neurotechnology, lethal autonomous weapons, autonomous cars, medical diagnostics, access to health care, and interaction with chemical, biological, radiological and nuclear systems).
6.1.3. Life opportunities (e.g. education, jobs and housing).
6.1.4. (Other) human rights and civil liberties, such as the right to the presumption of innocence (e.g. predictive policing), the right to a fair trial (e.g. culpability and recidivism prediction, and autonomous trials), freedom of expression and information (e.g. nudging, personalized information, info bubbles), privacy (e.g. facial recognition technology), and freedom of assembly and movement (e.g. tracking technology in public spaces).
6.2. Politics and society:
6.2.1. Discrimination and unfair treatment of groups based on individual or group traits, such as gender; group isolation and marginalization.
6.2.2. Differential impact on children, older persons, persons with disabilities and vulnerable groups.
6.2.3. International and national security (e.g. autonomous weapons, policing and border control vis-à-vis migrants and refugees, organized crime, terrorism and conflict proliferation and escalation).
6.2.4. Democracy (e.g. elections and trust).
6.2.5. Information integrity (e.g. misinformation or disinformation, deepfakes and personalized news).
6.2.6. Rule of law (e.g. functioning of and trust in institutions, law enforcement and the judiciary).
6.2.7. Cultural diversity and shifts in human relationships (e.g. homogeneity and fake friends).
6.2.8. Social cohesion (e.g. filter bubbles and declining trust in institutions and information sources).
6.2.9. Values and norms (e.g. ethical, moral, cultural and legal).
6.3. Economy:
6.3.1. Power concentration.
6.3.2. Technological dependency.
6.3.3. Unequal economic opportunities, market access, resource distribution and allocation.
6.3.4. Underuse of AI.
6.3.5. Overuse of AI or “technosolutionism”.
6.3.6. Stability of financial systems, critical infrastructure and institutions.
6.3.7. Intellectual property protection.
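To show how this vulnerability-based categorization could be operationalized, here is a minimal data-structure sketch in Python; the `VulnerabilityCategory` enum mirrors headings 6.1-6.3 above, while the `Risk` fields and the sample register contents are illustrative assumptions only.

```python
from dataclasses import dataclass
from enum import Enum

class VulnerabilityCategory(Enum):
    """Top-level categories from the list above (6.1-6.3)."""
    INDIVIDUALS = "individuals"
    POLITICS_AND_SOCIETY = "politics and society"
    ECONOMY = "economy"

@dataclass(frozen=True)
class Risk:
    name: str
    category: VulnerabilityCategory
    examples: tuple[str, ...] = ()

# A few entries drawn from the list above; real registers would be larger.
RISK_REGISTER = [
    Risk("Information integrity", VulnerabilityCategory.POLITICS_AND_SOCIETY,
         ("misinformation", "deepfakes", "personalized news")),
    Risk("Power concentration", VulnerabilityCategory.ECONOMY),
    Risk("Life opportunities", VulnerabilityCategory.INDIVIDUALS,
         ("education", "jobs", "housing")),
]

def risks_in(category: VulnerabilityCategory) -> list[Risk]:
    """All registered risks that fall under one vulnerability category."""
    return [r for r in RISK_REGISTER if r.category is category]

for risk in risks_in(VulnerabilityCategory.ECONOMY):
    print(risk.name)  # -> Power concentration
```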