SelfConsistencyAgent generates multiple independent responses to a given task and aggregates them into a single, consistent final answer. It leverages concurrent processing and employs a majority voting mechanism to ensure reliability.
Based on the research paper: Self-Consistency Improves Chain of Thought Reasoning in Language Models (Wang et al., 2022).
Class: SelfConsistencyAgent
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| name | str | "Self-Consistency-Agent" | Name of the agent |
| description | str | "An agent that uses self consistency..." | Description of the agent's purpose |
| system_prompt | str | CONSISTENCY_SYSTEM_PROMPT | System prompt for the reasoning agent |
| model_name | str | Required | The underlying language model to use |
| num_samples | int | 5 | Number of independent responses to generate |
| max_loops | int | 1 | Maximum number of reasoning loops per sample |
| majority_voting_prompt | Optional[str] | majority_voting_prompt | Custom prompt for majority voting aggregation |
| eval | bool | False | Enable evaluation mode for answer validation |
| output_type | OutputType | "dict" | Format of the output |
| random_models_on | bool | False | Enable random model selection for diversity |
Methods
| Method | Description | Returns |
|---|---|---|
| run(task, img?, answer?) | Generates multiple responses and aggregates them | Union[str, Dict[str, Any]] |
| aggregation_agent(responses, prompt?, model_name?) | Aggregates responses via majority voting | str |
| check_responses_for_answer(responses, answer) | Checks whether a given answer appears in any response | bool |
| batched_run(tasks) | Runs the agent on multiple tasks in batch | List[Union[str, Dict[str, Any]]] |
Examples
Basic Usage
Evaluation Mode
Random Models for Diversity
Batch Processing
How It Works
- Generates Multiple Independent Responses: Creates several reasoning paths for the same problem
- Analyzes Consistency: Examines agreement among different reasoning approaches
- Aggregates Results: Uses majority voting or consensus building
- Produces Reliable Output: Delivers a final answer reflecting the most reliable consensus
The agent uses a ThreadPoolExecutor to generate multiple responses concurrently, improving performance while maintaining independence between reasoning paths.
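The flow above can be sketched in plain Python, with a stub in place of the model call and exact-match counting in place of the agent's LLM-based aggregation (both simplifications for illustration):

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def self_consistency(sample, task, num_samples=5):
    """Generate num_samples responses concurrently and majority-vote them.

    `sample(task, i)` stands in for one independent model call; the real
    agent aggregates with an LLM voting prompt, not exact-match counting.
    """
    with ThreadPoolExecutor(max_workers=num_samples) as pool:
        responses = list(pool.map(lambda i: sample(task, i), range(num_samples)))
    winner, _count = Counter(responses).most_common(1)[0]
    return winner

# Stub model: three of five reasoning paths arrive at "42".
canned = ["42", "41", "42", "42", "43"]
answer = self_consistency(lambda task, i: canned[i], "meaning of life?", num_samples=5)
print(answer)  # → 42
```

Exact-match voting only works when answers are short and canonical; the agent's LLM-based aggregation also handles free-form responses that agree in substance but differ in wording.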
Output Formats
- "dict": Dictionary format with conversation history
- "str": Simple string output
- "list": List format
- "json": JSON formatted output
Best Practices
- Sample Size: Use 3-7 samples for most tasks; increase for critical decisions
- Model Selection: Choose models with strong reasoning capabilities
- Evaluation Mode: Enable for tasks with known correct answers
- Custom Prompts: Tailor majority voting prompts for specific domains
- Batch Processing: Use `batched_run` for multiple related tasks