
Facilitation and sense-making: a lazy literature review

I've asked Claude 3 to summarize a few relevant papers.

This AI-assisted literature review examines the current state of research on the role of facilitation in group sensemaking within deliberative processes. By analyzing key findings from the selected articles, this post aims to provide an overview of prevailing concepts, theories, and patterns of (AI-assisted) group facilitation, while also highlighting the debates, limitations, and areas for future research. It's a living document, updated whenever I discover another relevant paper; as of March 2024 it's based on the following:


Facilitators: The micropolitics of public participation and deliberation (December 2019)

Oliver Escobar

Oliver Escobar's chapter of the Handbook of Democratic Innovation and Governance sheds light on the often invisible but crucial role of facilitators in democratic innovations. Escobar portrays facilitators as a diverse community of practice, ranging from community organizers to discursive stewards, who enable inclusive and productive conversations in participatory processes.

Through their frontstage practices, facilitators shape the communication dynamics in forums, balancing structure and flow. They maintain impartiality on the topic while actively intervening to foster deliberative standards. For example, they might use storytelling to make the discussion more accessible or summarize key points to keep the conversation on track. However, they also face challenges, such as accommodating differences in communication styles and preventing the exclusion of certain voices.

Backstage, facilitators engage in political work, constructing performable publics, scripting interaction orders, and translating outputs. They are often involved in culture change projects, promoting new ways of working between civil society and the state. This can lead to tensions between tradition and change as democratic innovations challenge established practices and roles.

Escobar argues that more research is needed on the types and impacts of facilitation across contexts, focusing on the "how" of the practice. He believes that studying facilitation, despite methodological challenges, is crucial for understanding the inner workings of democratic innovations.

In conclusion, Escobar portrays facilitators as political workers navigating a landscape of tensions and power struggles as they advance participatory and deliberative practices. By shedding light on their work, he invites us to consider the micropolitics of public participation and deliberation, and how facilitators shape these processes in both visible and invisible ways.

Fine-tuning language models to find agreement among humans with diverse preferences (November 2022)

Michiel A. Bakker, Martin J. Chadwick, Hannah R. Sheahan, Michael Henry Tessler, Lucy Campbell-Gillingham, Jan Balaguer, Nat McAleese, Amelia Glaese, John Aslanides, Matthew M. Botvinick, Christopher Summerfield

Bakker et al. (2022) from DeepMind fine-tuned a 70-billion-parameter language model to generate consensus statements that maximize agreement among groups with diverse opinions. The model is trained on human-generated opinions and ratings, and it uses a reward model to predict individual preferences and rank candidate consensus statements under different social welfare functions. The model's consensus statements are preferred over those from baseline models and even over the best human-written opinions. The authors highlight the potential of using LMs to help groups align their values and find common ground on controversial topics. The key points are:

  • The authors fine-tuned a 70-billion-parameter LM to generate candidate consensus statements that maximize agreement among a group, based on their individual written opinions on questions related to political and moral issues.

  • A "reward model" is trained to predict how much each individual will agree with a candidate consensus statement. This allows quantifying and ranking statements based on their appeal to the overall group.

  • In human evaluations, the fine-tuned model generates consensus statements that are significantly preferred by participants compared to statements from baseline LMs without the fine-tuning.

  • The model-generated consensuses are even preferred over the best individual human-written opinions more than 65% of the time.

  • Analysis shows the model is sensitive to the specific opinions provided by individuals in the group rather than just generating generically appealing statements.

  • This approach opens up the potential for LMs to help groups of humans align their values and find agreement on contentious issues, though the authors note limitations and risks that require further study before real-world deployment.

In summary, the paper demonstrates a promising method for using LMs in combination with human feedback to facilitate consensus-finding among people with diverse views on challenging topics.
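
To make the ranking step concrete, here is a minimal sketch in Python. The `reward_model` callable and the welfare options are illustrative stand-ins for the paper's trained reward model and social welfare functions, not the authors' actual code:

```python
# Illustrative sketch of ranking candidate consensus statements with a
# reward model and a social welfare function (not the authors' code).

def rank_candidates(candidates, opinions, reward_model, welfare="min"):
    """Rank candidate consensus statements by predicted group agreement.

    candidates:   candidate consensus statements (strings)
    opinions:     each individual's written opinion (strings)
    reward_model: callable (opinion, statement) -> predicted agreement score
    welfare:      how individual scores are aggregated into a group score
    """
    def group_score(statement):
        scores = [reward_model(opinion, statement) for opinion in opinions]
        if welfare == "min":   # Rawlsian: protect the least-satisfied member
            return min(scores)
        if welfare == "mean":  # utilitarian: maximize average agreement
            return sum(scores) / len(scores)
        raise ValueError(f"unknown welfare function: {welfare}")

    # Best candidate first.
    return sorted(candidates, key=group_score, reverse=True)
```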

‘Generative CI’ through Collective Response Systems (February 2023)

Aviv Ovadya

This paper introduces the concept of "collective response systems" as a form of generative collective intelligence (CI) that enables large groups to express their perspectives, find common ground, and make decisions on complex issues. The key points are:

  1. Collective response systems allow a group to respond to a prompt, evaluate each other's responses, and distill the most representative responses. This enables "generative voting" where both the options and votes come from the collective.

  2. The systems are designed to give everyone a voice, incorporate everyone's input, and select responses that best represent the group. Notable examples include Polis (used by governments) and Remesh (used by the UN in conflict zones).

  3. Collective response systems can help overcome limitations of current approaches like polls and town halls, which don't scale well and can miss valuable insights from marginalized voices. Iterative collective response processes, called "collective dialogues", allow deeper exploration of issues and solutions.

  4. While not a panacea, collective response systems could help revitalize democracy, corporate governance, conflict resolution, and more - especially when combined with in-person deliberative processes.

  5. Further research is needed on evaluation metrics and understanding design trade-offs for optimizing collective response systems for different purposes.

The paper frames collective response systems as an emerging approach to tap into the wisdom of crowds at scale to grapple with the complex challenges of modern democracy.
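
The core loop of such a system is easy to sketch. Below is a deliberately simplified, hypothetical rendering of "generative voting": the group supplies both the options and the votes, and the system distills the responses with the broadest support. Production systems like Polis use more sophisticated, group-aware ranking than a plain average:

```python
import random

def collective_response_round(prompt, participants, sample_size=5, top_k=3):
    """One round of a collective response system (illustrative only).

    participants: objects exposing respond(prompt) -> str and
                  rate(response) -> float in [0, 1].
    """
    # 1. Everyone responds to the same prompt.
    responses = [p.respond(prompt) for p in participants]

    # 2. Each participant rates a random sample of responses; rating
    #    everything would not scale to large groups.
    ratings = {r: [] for r in responses}
    for p in participants:
        for r in random.sample(responses, min(sample_size, len(responses))):
            ratings[r].append(p.rate(r))

    # 3. Distill: surface the responses with the highest average approval.
    scored = [(sum(v) / len(v), r) for r, v in ratings.items() if v]
    return [r for _, r in sorted(scored, reverse=True)[:top_k]]
```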

Diversity in Facilitation: Mapping Differences in Deliberative Designs (March 2023)

Dirk von Schneidemesser, Daniel Oppold and Dorota Stasiak

Von Schneidemesser et al. (2023) compare three facilitation approaches in deliberative mini-publics: self-organized (SO), multi-method (MM), and dynamic facilitation (DF). The study reveals that facilitation influences inclusion, interaction, and impact in distinct ways. DF excelled at ensuring internal inclusion and surfacing diverse perspectives, while MM fostered a positive atmosphere and increased civic engagement readiness. SO demonstrated that deliberation can occur without a facilitator but is vulnerable to participant dominance. The authors emphasize the importance of matching facilitation approaches to the specific goals of the deliberative process.

The key points are:

  1. While the importance of facilitation is widely acknowledged, there is limited scholarly work comparing different facilitation approaches and their implications for the quality of deliberation.

  2. The authors designed three deliberative mini-publics in Magdeburg, Germany, each using a different facilitation approach: self-organized (SO), multi-method (MM), and dynamic facilitation (DF).

  3. All mini-publics were given the same task, but the facilitation varied. The SO approach had minimal facilitation, the MM approach used a professional facilitator and a mix of techniques, and the DF approach followed a specific method focused on eliciting participants' thoughts and emotions.

  4. Analysis of video recordings and participant surveys revealed that the facilitation approach influenced inclusion, interaction, and impact in distinct ways.

  5. The DF approach was most effective at ensuring internal inclusion and surfacing diverse perspectives, while the MM approach excelled at fostering a positive atmosphere and increasing readiness for future civic engagement. The SO approach demonstrated that deliberation can occur without a facilitator, but is vulnerable to dominance by certain participants.

  6. The authors conclude that there is no one-size-fits-all approach to facilitation. Different approaches have strengths and weaknesses, and the choice of facilitation should be matched to the specific goals of the deliberative process.

  7. Further research is needed to establish categories and standards for facilitation, and to develop nuanced indicators for assessing the quality of deliberation.

In summary, the study highlights the important role of facilitation in shaping deliberative processes and calls for more systematic comparison of facilitation approaches to guide the design and implementation of deliberative mini-publics.

Opportunities and Risks of LLMs for Scalable Deliberation with Polis (June 2023)

Christopher T. Small, Ivan Vendrov, Esin Durmus, Hadjar Homaei, Elizabeth Barry, Julien Cornebise, Ted Suzman, Deep Ganguli, Colin Megill

This paper explores the potential opportunities and risks of applying LMs to improve the scalability and efficiency of the Polis platform, which facilitates large-scale online deliberation.

Polis allows participants to submit comments and vote on others' comments, then uses machine learning to map the opinion landscape. However, synthesizing results from large conversations is costly and time-consuming.
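
For context, Polis encodes each conversation as a sparse participant-by-comment vote matrix and projects it into a low-dimensional "opinion space" to find opinion groups. The sketch below captures the gist with PCA and k-means; Polis's production pipeline involves more steps than this:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def map_opinion_landscape(votes, n_groups=2):
    """Simplified sketch of Polis-style opinion mapping.

    votes: (participants x comments) matrix with +1 agree, -1 disagree,
           0 pass/unseen (real data is sparse; missing votes are 0 here).
    """
    coords = PCA(n_components=2).fit_transform(votes)  # opinion space
    groups = KMeans(n_clusters=n_groups, n_init=10).fit_predict(coords)
    return coords, groups

# Toy example: six voters, four comments, two rough opinion camps.
votes = np.array([
    [ 1,  1, -1, -1],
    [ 1,  1, -1,  0],
    [ 1,  0, -1, -1],
    [-1, -1,  1,  1],
    [-1, -1,  1,  0],
    [ 0, -1,  1,  1],
])
coords, groups = map_opinion_landscape(votes)
```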

The authors identify several areas where LMs could help:

  • Topic modelling to categorize comments

  • Summarization to generate digestible reports

  • Moderation to filter out inappropriate content

  • Comment routing to optimize which comments are shown to participants

  • Identifying points of consensus and group perspectives

  • Predicting votes to handle missing data

Experiments with Anthropic's Claude demonstrate promising results for topic modelling, summarization, and vote prediction. Access to larger context windows substantially improves performance. However, risks around LM-generated misinformation, bias, and lack of transparency need to be carefully mitigated, e.g. through human oversight and participatory feedback.
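
As a rough illustration of the vote-prediction idea, one can show an LM a participant's voting history and ask for the probability that they would agree with an unseen comment. The prompt format below is my assumption, not the paper's exact setup:

```python
def vote_prediction_prompt(voting_history, target_comment):
    """Build a prompt asking an LM to predict a participant's next vote.

    voting_history: list of (comment_text, vote) pairs, where vote is
                    "agree" or "disagree"; target_comment is unseen.
    """
    history = "\n".join(
        f'- "{text}" -> {vote}' for text, vote in voting_history
    )
    return (
        "A participant in an online deliberation voted as follows:\n"
        f"{history}\n\n"
        f'How would they vote on this comment: "{target_comment}"?\n'
        "Reason step by step, then give the probability (0 to 1) that "
        "they would agree."
    )
```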

The authors emphasize that LMs should augment rather than replace human agency in the deliberative process; completely simulated deliberation, they argue, would be unethical. Techniques like iterative information compilation, probability outputs, and chain-of-thought prompting proved useful for applying LMs to Polis. With responsible development, LMs have the potential to make large-scale deliberation more accessible and impactful, but significant open questions remain around appropriate constraints and metrics.

This paper provides a thorough analysis of both the benefits and challenges of integrating cutting-edge language models into online deliberation platforms, advocating for a human-centered approach to realize the technology's potential.

Conversational Swarm Intelligence (September 2023, December 2023)

Louis Rosenberg, Gregg Willcox, Hans Schumann and Ganesh Mani

Rosenberg et al. (2023) from Unanimous AI introduce Conversational Swarm Intelligence (CSI) as a novel technology that enables large-scale deliberation and may offer a path to collective superintelligence. CSI addresses the "many minds problem", whereby conversational quality degrades as group size grows beyond 5-7 members, by breaking large populations into smaller subgroups of ideal size for deliberation (4-7 members), mirroring the decision-making dynamics of biological swarms. The subgroups are connected using AI Observer Agents powered by LMs. These agents monitor the conversation in each subgroup, distill salient content, and convey it to neighbouring subgroups via Surrogate Agents, enabling information to propagate across the full population. Initial experiments show promising results in terms of enhanced engagement, balanced participation, and user satisfaction compared to traditional online chat.
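
A minimal sketch of the CSI architecture might look like this; the ring topology and function names are my assumptions, standing in for Unanimous AI's proprietary implementation:

```python
import random

def form_subgroups(population, size=5):
    """Partition a large population into deliberation-sized subgroups."""
    random.shuffle(population)
    return [population[i:i + size] for i in range(0, len(population), size)]

def propagate(subgroup_messages, summarize):
    """One CSI-style information-propagation step (illustrative only).

    subgroup_messages: one list of chat messages per subgroup
    summarize:         callable (messages) -> short distilled summary,
                       playing the role of the LM-powered Observer Agent
    """
    summaries = [summarize(msgs) for msgs in subgroup_messages]
    for i in range(len(subgroup_messages)):
        neighbour = (i + 1) % len(subgroup_messages)  # ring topology (assumed)
        # The Surrogate Agent voices the neighbouring group's distilled view.
        subgroup_messages[neighbour].append(
            f"[Surrogate] A neighbouring group thinks: {summaries[i]}"
        )
    return subgroup_messages
```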

The first experimental study engaged 48 American voters in real-time deliberation using CSI and standard chats. CSI participants contributed 46% more messages and 51% more content than standard chat groups (p<0.001). Contributions were also more balanced, with a 37% decrease in the gap between the most and least active members. Participants preferred CSI, felt their contributions had more impact, and believed the group generated better justifications for their answers (p<0.05).

In another study, Rosenberg et al. (2023) engaged 81 Republican voters using a CSI platform called Thinkscape to forecast which of six candidates would garner the most national support and to indicate the specific reasons for this result. Within just 6 minutes, the group converged on Ron DeSantis as the answer and generated over 400 reasons supporting or opposing various candidates, including 206 justifications supporting the selected candidate. DeSantis was supported by a significant majority throughout the deliberation (p<0.001), demonstrating that CSI can rapidly surface both quantitative preferences and qualitative insights.

The third study tested the ability of CSI to enable distributed groups to solve a multi-faceted problem that requires strategy and planning – selecting players for a weekly Fantasy Football contest.

  1. Sessions were conducted weekly over 11 weeks during the 2023-2024 NFL season, with groups of 25 to 30 participants each week.

  2. Participants first selected players individually using a survey. Then, they collaboratively selected players via real-time conversation in a CSI platform called Thinkscape.

  3. The groupwise result generated in Thinkscape averaged 86.8 points per session, outperforming the median individual's score (77.2 points) from the pre-swarm survey (p=0.020) and the Wisdom of the Crowd (WoC) answers (74.2 points) (p<0.001).

  4. On average, Thinkscape exceeded the score of 66% of individually generated rosters.

  5. Thinkscape picked a player that outperformed the WoC's choice in 24% of selections and only picked a worse-performing player 4% of the time.

This study demonstrates that CSI can amplify the collective intelligence of networked human groups in a complex collaborative task that requires strategic planning and trade-off decisions. The results show that groups using the CSI platform (Thinkscape) can outperform both the median individual and the traditional Wisdom of Crowd approach. This suggests that CSI enables groups to leverage their collective intelligence more effectively than traditional methods.

These studies demonstrate that CSI provides both the qualitative benefits of small-scale deliberation (e.g., surfacing diverse perspectives and reasoning) and the quantitative benefits of large-scale polling (e.g., statistical significance and representativeness), harnessing the intelligence of large groups through real-time, open-ended conversations facilitated by AI agents.

The authors propose that CSI could be valuable for market research, organizational decision-making, collaborative forecasting, political insights, and deliberative democracy. As LMs advance, more diverse hybrid human-AI teams could be fielded in CSI deliberations. This could lead to even more innovative problem-solving by integrating both human and machine expertise. Future work will explore CSI's impact on the accuracy, effectiveness, and representativeness of collective decision-making, as well as its application in deliberative democracy and civic engagement contexts with even larger populations.

Leveraging AI for democratic discourse: Chat interventions can improve online political conversations at scale (October 2023)

Lisa P. Argyle, Christopher A. Bail, Ethan C. Busby, Joshua R. Gubler, Thomas Howe, Christopher Rytting, Taylor Sorensen, and David Wingate

This group of researchers developed an AI chat assistant to tackle the problem of divisive online political conversations. The assistant, powered by advanced language models, acted as a real-time moderator, suggesting ways to rephrase messages to promote understanding and respect without altering the content of the discussion.

To put the AI to the test, the researchers conducted an experiment involving discussions about gun regulation in the United States. They paired participants with opposing views and randomly assigned the AI assistant to one partner in each conversation. As the discussion unfolded, the AI would intermittently offer suggestions to rephrase messages using techniques like restatement, validation, and politeness.

For example, if a participant wrote, "I can't believe you support such a dangerous policy," the AI might suggest rephrasing it as, "I appreciate you sharing your perspective, even though I don't agree with the policy you support." By making these subtle changes, the AI aimed to foster a more respectful and understanding exchange.
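
A hypothetical version of the suggestion step could be as simple as the prompt builder below; the three techniques are from the paper, while the wording of the prompt is my own:

```python
TECHNIQUES = {
    "restatement": "Restate the other person's point to show it was heard.",
    "validation":  "Acknowledge that the other person's view is understandable.",
    "politeness":  "Make the message more polite without weakening its substance.",
}

def rephrase_prompt(draft_message, technique):
    """Ask an LM to suggest a rephrasing (illustrative prompt, not the study's)."""
    return (
        "You are assisting a political conversation between people who disagree. "
        f"Rephrase the draft message below. {TECHNIQUES[technique]} "
        "Do not change the sender's position on the issue.\n\n"
        f"Draft: {draft_message}\nSuggested rephrasing:"
    )
```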

The results were impressive. Participants who received AI-suggested rephrasings accepted them two-thirds of the time, and these messages had a more positive tone without deviating from the topic at hand. Most notably, the intervention significantly increased the perceived quality of the conversation and the willingness to acknowledge others' perspectives, especially for the partner of the person receiving AI assistance.

Remarkably, the AI achieved these improvements without manipulating participants' opinions on gun policy. This suggests that the tool can enhance the quality of discourse and promote democratic values without pushing a particular agenda.

The researchers believe their findings demonstrate the potential for carefully deployed AI to address the scale of divisive online conversations. By promoting respect, understanding, and a commitment to hearing differing viewpoints, these tools could play a vital role in fostering healthier democratic engagement in an increasingly digital world.

As online interactions continue to shape public opinion and political landscapes, this innovative approach offers hope for a future where technology can help bridge divides and facilitate more constructive dialogue. The AI chat assistant serves as a powerful example of how advanced language models, when used responsibly, can support the democratic principles that underpin our society.

Towards Collective Superintelligence (October 2023, January 2024)

Louis Rosenberg, Gregg Willcox, Hans Schumann, Ganesh Mani

By combining the principles of Swarm AI with the power of LMs, Conversational Swarm Intelligence (CSI) allows large, networked groups to engage in open-ended, real-time conversations, harnessing the benefits of both small-group reasoning and large-group collective intelligence.

Imagine a diverse group of 100 individuals, seamlessly connected through a network of smaller subgroups, each with 4 to 7 members for optimal deliberation. Picture AI agents, powered by LMs, facilitating the flow of information and insights across the network, allowing the collective wisdom to emerge.

To put CSI to the test, the researchers conducted a series of experiments. In the pilot study, they asked 35 participants to tackle questions from the Raven's Advanced Progressive Matrices (RAPM) IQ test using the Thinkscape CSI platform (the group was divided into 7 subgroups of 5 people, with an AI agent assigned to each subgroup to observe and share insights across the network). A baseline group of 35 people took the same test as isolated individuals using a standard survey. The baseline group averaged 45.6% correct, corresponding to a nominal IQ of 100, while the groups using Thinkscape averaged 80.5% correct, placing them in the 97th percentile of IQ test-takers and corresponding to an effective IQ increase of 28 points (p<0.001). The CSI groups' advantage grew with question difficulty, with a 2X increase in accuracy on the hardest 50% of questions, and they also outperformed a traditional Wisdom of Crowd (WoC) method, which yielded an effective IQ of 115.
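
The "effective IQ" figures follow from mapping the group's percentile rank onto the standard IQ scale (mean 100, SD 15); assuming that conversion, the arithmetic checks out:

```python
from scipy.stats import norm

# 80.5% accuracy placed the CSI groups in the 97th percentile of test-takers.
# Mapping that percentile onto the IQ scale (mean 100, SD 15):
effective_iq = 100 + 15 * norm.ppf(0.97)
print(round(effective_iq))  # ~128, i.e. roughly 28 points above the mean
```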

In another study, 241 participants were tasked with estimating the number of gumballs in a jar using Thinkscape (the group was automatically partitioned into 47 subgroups of 5 or 6 members, each with an AI observer agent). The CSI process was compared with a traditional survey-based estimation, with both methods given 4 minutes to formulate their estimates; GPT-4 was also shown the same photograph and asked for an estimate. The average individual was off by 361 gumballs (55% error) and GPT-4 by 279 gumballs (42%). The standard survey-based Wisdom of the Crowd was off by 163 gumballs (25%), while the CSI group was off by only 82 gumballs (12%), significantly outperforming the average individual (p<0.001), GPT-4, and the traditional survey-based CI method.

These studies suggest that CSI is a powerful tool for amplifying collective intelligence, offering a viable pathway to achieving Collective Superintelligence:

  • Networked groups using CSI can efficiently consider, debate, and converge upon answers to IQ test questions as a unified "conversational swarm," significantly amplifying collective intelligence compared to the average individual, a groupwise statistical aggregation, and prior graphical swarming methods.

  • CSI is a viable method for human groups to deliberate through natural language and reach solutions of amplified accuracy, outperforming traditional Collective Intelligence methods and even GPT-4 in the estimation task.

Picture a future where thousands, even millions, of minds can come together in real-time, their collective wisdom and insights harnessed through the power of CSI. It's a vision of a world where the sum of human knowledge, experience, and creativity can be leveraged to solve the most pressing challenges we face, paving the way for a brighter, more collaborative future.

Democratic Policy Development using Collective Dialogues and AI (November 2023)

Andrew Konya, Lisa Schirch, Colin Irwin, Aviv Ovadya

Andrew Konya and his colleagues have developed a novel approach to creating policies that align with the will of the people. Their process combines AI-powered collective dialogues, where participants learn about issues, share their views, and evaluate others' perspectives, with expert input to generate high-quality, representative policies.

Imagine a scenario where an AI assistant is asked for medical advice. What should the AI do? To answer this question, the researchers first recruit a diverse group of participants, representative of the US population. These participants engage in a collective dialogue, learning about AI and medical advice, deliberating on the issue, and sharing their informed views.

Next, the process uses AI to identify points of consensus among the participants' responses. GPT-4, a powerful language model, transforms these consensus points into policy clauses, which are then assembled into an initial policy. For example, one clause might state, "Provide Reputable Sources: The AI should provide links to reputable medical sources and peer-reviewed studies to support the information it provides."

But the process doesn't stop there. Experts, such as doctors and AI policy specialists, review and refine the policy to improve its quality and address any gaps or ambiguities. The revised policy is then presented to another group of participants for further feedback and refinement.

Finally, the researchers assess the level of public support for the final policy through a larger collective dialogue with a highly representative sample. They also use GPT-4 to check the policy's consistency with established frameworks like the Universal Declaration of Human Rights.
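
Stripped to its skeleton, the consensus-to-policy step might look like the sketch below, where `find_consensus` and `to_clause` are placeholders for the consensus-point selection and the GPT-4 clause-drafting the paper describes:

```python
def draft_policy(responses, find_consensus, to_clause):
    """Sketch of the consensus-to-policy step (hypothetical names).

    responses:      participants' written views from a collective dialogue
    find_consensus: callable returning statements with broad support
                    across demographic groups
    to_clause:      callable (consensus point) -> policy clause; GPT-4
                    plays this role in the paper
    """
    clauses = [to_clause(point) for point in find_consensus(responses)]
    # The assembled draft then goes to expert review, a fresh round of
    # participant feedback, and a final large-sample support check.
    return "\n".join(f"{i + 1}. {c}" for i, c in enumerate(clauses))
```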

The team tested this process on three topics: medical advice, wars and conflicts, and vaccines. Each run took just two weeks and cost around $10,000, yet incorporated input from over 1,500 participants. The resulting policy guidelines achieved impressive levels of support, ranging from 75% to 81% overall and 70% to 75% across various demographic groups.

Participants found the experience meaningful and trusted the process, with one stating, "I felt like my voice mattered and that I was contributing to something important." The researchers also observed evidence of participants updating their views through the deliberative process, demonstrating the power of informed engagement.

While the approach has limitations and room for improvement, it offers a promising path forward for creating policies that reflect the collective wisdom of the people. As the authors continue to refine their methods, they envision applications in AI policy development, peace agreements, and breaking political gridlocks, ultimately bringing us closer to a future aligned with the will of humanity.

Deliberative Technology for Alignment (December 2023)

Andrew Konya, Deger Turan, Aviv Ovadya, Lina Qiu, Daanish Masood, Flynn Devine, Lisa Schirch, Isabella Roberts, Deliberative Alignment Forum

This paper presents a comprehensive framework for aligning the future with the collective will of humanity using deliberative technologies and AI. The authors, who represent the leading organisations in this field (Remesh, AI Objectives Institute, AI & Democracy Foundation, Collective Intelligence Project), argue that as humanity's impact on the future grows, it is crucial to ensure that this impact is guided by the will of humanity.

The authors suggest three mandates for action to increase the probability that the future aligns with the will of humanity:

  1. Generate a universally legitimate will of humanity signal as an open public good.

  2. Build intelligent deliberative alignment into powerful institutions.

  3. Ensure the most powerful AI systems are aligned with the will of humanity.

The paper introduces the concept of the "will of humanity" as the combined set of all humans' deliberate preference judgments across possible futures. This will can be represented using a "Will matrix," which captures the alignment between humans and items related to future characteristics. The authors explore the properties and partitions of the will of humanity, highlighting its constantly evolving, heterogeneous, and open-ended nature.
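
In code, the Will matrix can be pictured as a humans-by-items array of preference judgments. This toy rendering is my simplification of the concept; the scale and read-outs are assumptions:

```python
import numpy as np

# Toy "Will matrix": rows are humans, columns are items describing possible
# futures; entries are deliberate preference judgments in [-1, 1]
# (+1 strong support, -1 strong opposition; the scale is an assumption).
will = np.array([
    [ 0.9,  0.2, -0.7],   # person 0
    [ 0.8, -0.1, -0.9],   # person 1
    [-0.3,  0.6,  0.4],   # person 2
])

mean_support  = will.mean(axis=0)  # how supported each future item is
contestedness = will.std(axis=0)   # rough signal of how heterogeneous the will is
```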

The paper then discusses alignment systems, which are designed to align the future with the will of humanity. These systems involve sensing the will, identifying actions, predicting impacts, assessing alignment, and executing actions. Examples of alignment systems include democratic governments, the United Nations, AI systems using reinforcement learning from human feedback (RLHF), and corporations.

The authors discuss how deliberative technologies, such as juries, deliberative polling, citizens' assemblies, online forums, and collective response systems, can be integrated into alignment systems to better sense and align with the will of humanity. They explore the challenges associated with integrating deliberative technologies into alignment systems, such as the scalability-richness trade-off, inhomogeneous participant capacity, finite attention, and distilling results for human consumption.

The paper further explores how AI can augment deliberative technologies to create more intelligent and effective alignment systems. AI can be used to enable interactive conversational dynamics, individualized experiences, optimal attention allocation, intelligent distillation, healthy participation, proof of understanding, and computing alignment.

The authors discuss the application of intelligent deliberative alignment to both institutions and AI systems, focusing on building capacity, driving adoption, and fostering symbiotic improvement between AI and alignment systems. They highlight open problems and opportunities, such as validation, corrupting human will, computing alignment, and unbounded extrapolation.

Assuming Consensus: How socio-technical assumptions are influencing decision-making in the age of machine learning (February 2024)

Hanna Barakat, Camille Canon, Molly Heaney-Corns

As group decision-making processes become increasingly automated, Machine Learning (ML) is being integrated into Group Decision-Making Support Systems (GDSSs) to address challenges such as online divisiveness, lack of diversity in opinions, difficulty gauging group sentiment, and slow decision-making processes. However, the underlying assumptions embedded in the design of these technologies can have significant short- and long-term implications:

  1. Social, human processes can be replaced (or removed) by technology to optimize for efficient decision-making. GDSSs optimize for a decision (i.e., a majority vote) as the final output of the system, reducing consensus to an output rather than a participant-driven process. This assumption raises the question of what is lost when open and inclusive deliberation, discussion, and negotiation are removed or condensed from decision-making.

  2. GDSSs can presume the path a group will take in making a decision, from the information needed to inform a decision to the type of deliberation and the method of voting. Tools can optimize the process of decision-making by presuming what information, method of deliberation (or conversation), and kind of vote will result in the best and most efficient decision. This assumption may overlook the richness of diverse perspectives, creative solutions, and complex interactions that arise during group deliberations.

  3. Decision-making processes build linearly: GDSSs simplify dynamic group facilitation into digital processes that adhere to linear approaches. This assumption may limit the exploration of unconventional ideas and hinder the emergence of innovative solutions that could arise from more open-ended, non-linear approaches.

  4. Clustering information based on similarity leads to deeper insights about group sentiment: GDSSs commonly use Natural Language Processing (NLP) for sentiment analysis and topic modelling, assuming that clustering information based on similarity will lead to deeper community insights. This assumption runs the risk of oversimplifying contextual nuance in datasets and overlooking individual differences, debate, and perspective (see the sketch after this list).

  5. Correlation between data points is meaningful or significant: GDSSs analyze datasets to identify meaningful patterns and relationships, assuming that connections between data points are significant. However, this assumption may overlook differences and result in discrimination, as social processes are conflated with algorithmic outputs that imitate flawed systems.
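
To see why the clustering assumption (point 4 above) matters, consider a bare-bones version of the step: two comments that share vocabulary will typically land in the same cluster even when they take opposite positions. A hypothetical example:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

comments = [
    "The new policy will help small businesses grow.",
    "The new policy will not help small businesses at all.",  # opposite stance
    "We should invest more in public transport.",
]

# Similarity-based clustering tends to group the first two comments on
# shared vocabulary, collapsing a genuine disagreement into one "topic".
X = TfidfVectorizer().fit_transform(comments)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
```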

As optimization is prioritized, ML-driven GDSSs may inadvertently create self-fulfilling prophecies by informing the system's inputs and, through correlation, confirming the outputs. The paper suggests several areas for future research, including:

  1. Localized GDSS Implementation: Exploring how GDSSs using ML with smaller groups work to preserve, correct, or include context and colloquial lexicon, and the role of human moderators in different ML-driven GDSSs.

  2. Longitudinal Studies on ML Use in Group Decision-Making: Examining the evolving role of ML within GDSSs and the long-term impacts of optimizing group decision-making using ML.

  3. Human-Computer Interaction and Collective Identity Formation: Investigating how ML-driven GDSSs influence individuals' and communities' perceptions of self and how these decisions affect in-person working dynamics, group cohesion, and relationship building.

  4. Community Implementation and Algorithmic Trust: Exploring the extent to which distrust in algorithms impedes participation, how values can be incorporated into ML and GDSSs to address these concerns, and how context can be implied when automation is present.


Conclusion

This is a basic overview of recent research at the intersection of sensemaking, deliberative processes and AI / LMs. The reviewed articles discuss novel approaches, such as Group Decision-Making Support Systems, Conversational Swarm Intelligence, Collective Response Systems and fine-tuned LMs, that show promise in enhancing the efficiency, inclusivity, and impact of sensemaking and deliberation. The studies also highlight the importance of matching facilitation methods to specific deliberative goals and contexts.

Facilitation of deliberative processes is rapidly evolving, and several areas require further investigation. These include scaling up experiments, conducting more comparative studies, examining ethical implications, and investigating the long-term impact of facilitated deliberation on participants' attitudes, behaviours, and levels of engagement. By addressing these research gaps, scholars can contribute to a more comprehensive understanding of how facilitation can be effectively leveraged, enhanced by AI, or combined with other methods and technologies to upgrade sensemaking and consensus-building in diverse online and offline environments.

As deliberative processes continue to gain traction as a means of engaging citizens and other stakeholders in participatory self-governance, the insights from the reviewed literature can inform the design and implementation of new solutions. By carefully considering the role of facilitation and employing evidence-based approaches, practitioners can attain more alignment (across conflicting values) and make decisions with greater competence (at identifying and evaluating options) and robustness (to real-world complexity).
