DIY your AI agent

AI-powered Smart Terminal Assistant

Repository Info

Stars: 5
Forks: 0
Watchers: 5
Issues: 0
Language: Python
License: MIT License

About This Server

AI-powered Smart Terminal Assistant

Model Context Protocol (MCP): this server can be integrated with AI applications to provide additional context and capabilities.

Documentation

# DIY Your AI Agent: AI-powered Smart Terminal Assistant  

<p align="center">
  <a href="./README_CN.md">中文文档</a> |
  <a href="https://deepwiki.com/whitejoce/DIY-your-AI-agent">deepwiki</a> |
  <a href="https://github.com/whitejoce/mcp_agent">yet-another-mcp-implementation</a>
</p>


## 🔥 Project Overview

This project is an **AI-powered smart terminal assistant (Linux Agent) based on Large Language Models (LLM)**. It automatically analyzes user input, determines whether a terminal command needs to be executed, and returns results in **JSON format**, ensuring secure interactions.  

<img src="./img/test_en.png" alt="Example">

**Collaboration Process Between Conversation and Command Execution:**  
```plaintext
User Input → LLM Intent Recognition → Determine if Command Execution is Needed →
   → Yes: LLM Generates Command → Execute Command → Return Result → Generate Natural Language Response
   → No: Directly Generate Natural Language Response
```
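
The branch on "Determine if Command Execution is Needed" comes down to parsing the model's structured reply. Below is a minimal sketch of that routing step, assuming the JSON schema shown in the Example Usage section (`action`, `command`, `explanation`); the actual routing code in `agent_en.py` may differ.

```python
import json

def route(llm_reply: str) -> dict:
    """Parse the LLM reply and decide between command execution and plain chat."""
    try:
        payload = json.loads(llm_reply)
    except json.JSONDecodeError:
        # Not valid JSON: fall back to treating the reply as a plain chat answer.
        return {"action": "chat", "explanation": llm_reply}

    if payload.get("action") == "execute_command" and payload.get("command"):
        return payload  # hand off to the confirmation + execution steps
    return {"action": "chat", "explanation": payload.get("explanation", llm_reply)}
```
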
**Key Features:**  
- 🖥 **Intelligent Parsing**: Determines whether user requests involve file operations, process management, etc.
- 🔒 **Security Protection**: Automatically prompts for confirmation when encountering dangerous commands (e.g., `rm -rf`).
- 🔄 **Interactive Experience**: Provides structured JSON responses for readability and extensibility.
- ⚡ **Instant Execution**: Executes commands in the Bash terminal upon user confirmation and returns the results.

## 🛠 Example Usage

Start the interactive smart terminal:  
```bash
python agent_en.py
```
Then enter any Linux-related command, for example:  

```plaintext
Smart_Shell> Show the files in the current directory
```

**Agent Response (JSON Format):**  
```json
{
  "action": "execute_command",
  "command": "ls -l",
  "explanation": "Lists detailed information of the current directory"
}
```
If the command is potentially dangerous, such as `rm -rf`, the system will automatically prompt for a second confirmation.  
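
A sketch of how such a guard could look; the pattern list and prompt text below are illustrative assumptions, not the exact checks used in `agent_en.py`:

```python
import re

# Illustrative blacklist; extend it to match your own risk tolerance.
DANGEROUS_PATTERNS = [r"\brm\s+-rf\b", r"\brm\s+-fr\b", r"\bmkfs\b", r"\bdd\s+if=", r">\s*/dev/sd"]

def confirm_if_dangerous(command: str) -> bool:
    """Return True if the command may run, asking the user again for risky ones."""
    if any(re.search(pattern, command) for pattern in DANGEROUS_PATTERNS):
        answer = input(f"'{command}' looks destructive. Run it anyway? [y/N] ")
        return answer.strip().lower() == "y"
    return True
```
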

---

## 📖 How It Works

This AI Agent operates based on **LLM + rule constraints**, with core logic including:  

1. **LLM-based reasoning**: Uses models like Qwen/Qwen2.5-7B-Instruct (or other LLMs) to analyze user input.  
2. **Rule-based guidance**: Ensures AI responses comply with predefined formats (e.g., JSON output).  
3. **Interactive mechanism**: Users manually confirm commands to prevent accidental operations.  
4. **Terminal execution**: Uses `subprocess.Popen()` to run Bash commands and return execution results (see the sketch after this list).

> For a simplified (MVP) version of the code, see `agent_mvp_en.py`.
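
A minimal sketch of step 4, the execution helper (the function name is hypothetical; the real code lives in `agent_en.py`). Whatever the command prints is fed back to the LLM so it can produce the final natural-language answer:

```python
import subprocess

def run_command(command: str) -> str:
    """Run a command in Bash and return its combined stdout/stderr."""
    proc = subprocess.Popen(
        command,
        shell=True,
        executable="/bin/bash",
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    )
    output, _ = proc.communicate()
    return output
```
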
---

## 🚀 Quick Start

### 1️⃣ **Environment Setup**

Requires **Python 3.8+**. Install the necessary dependencies:
```bash
pip install openai rich
```  

### 2️⃣ **Configure API**

Modify `agent_en.py`:  
```python
url = "your_api_url"
api_key = "your_api_key"
```  
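
These two values feed the OpenAI-compatible client. A rough sketch of how they are presumably used via the `openai` package; the model name and prompts below are placeholders, so check `agent_en.py` for the exact call:

```python
from openai import OpenAI

# base_url and api_key as configured above
client = OpenAI(base_url="your_api_url", api_key="your_api_key")

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct",  # or whichever model your endpoint serves
    messages=[
        {"role": "system", "content": "Reply only with JSON: action, command, explanation."},
        {"role": "user", "content": "Show the files in the current directory"},
    ],
)
print(response.choices[0].message.content)
```
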

---

## 💡 How to Improve?

If you want to make this Agent more intelligent and powerful, consider the following approaches:  

1️⃣ **Ask the Friendly AI** 🤖
   - Engage in discussions with the model directly in the terminal to refine your ideas.  
   - Ask questions like: "How can I make the Agent more autonomous?" or "How can I optimize my terminal assistant?"  

2️⃣ **Model Context Protocol (MCP) 🏗️**
   - MCP (Model Context Protocol) is an open protocol designed to seamlessly integrate Large Language Models (LLMs) with external data sources and tools.  
   - By standardizing interactions between AI systems and data sources, MCP enables models to generate more accurate and contextually relevant responses.  
   - Refer to the [MCP Documentation](https://modelcontextprotocol.io/introduction) and [FastMCP](https://gofastmcp.com/getting-started/welcome) to learn more; a minimal server sketch follows at the end of this section.

3️⃣ **AI-driven Automation Scripts 🚀**
   - Enable AI not just to execute commands but also to **invoke other automation scripts**, forming a more adaptive workflow.
   - Write automation scripts for tasks like **batch file organization or automated log analysis**, and let the AI call on those best practices at appropriate times to achieve higher-level intelligent automation.

By implementing these improvements, your AI terminal assistant can evolve beyond just a "command parser" into a truly intelligent Agent! 💡✨
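
As an illustration of point 2️⃣, a minimal FastMCP server exposing a shell tool might look like the sketch below. It follows the FastMCP quick-start pattern, is not part of this repository, and the tool name is arbitrary:

```python
# pip install fastmcp
import subprocess

from fastmcp import FastMCP

mcp = FastMCP("smart-shell")

@mcp.tool()
def run_shell(command: str) -> str:
    """Run a Bash command and return its combined output (add your own safety checks)."""
    result = subprocess.run(
        command, shell=True, executable="/bin/bash",
        capture_output=True, text=True,
    )
    return result.stdout + result.stderr

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport used by MCP clients
```

An MCP-capable host can then discover and call `run_shell` as a tool, instead of the agent shelling out directly.
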

---

## 📜 License

This project is licensed under the **MIT License**. Feel free to modify and use it!

## 🤝 Contributions

Issues & PRs are welcome! If you have better ideas, feel free to contribute code!  

Quick Start

1. Clone the repository

   git clone https://github.com/whitejoce/DIY-your-AI-agent

2. Install dependencies

   cd DIY-your-AI-agent
   pip install openai rich

3. Follow the documentation

   Check the repository's README.md file (included above) for specific installation and usage instructions.

Repository Details

Owner: whitejoce
Repo: DIY-your-AI-agent
Language: Python
License: MIT License
Last fetched: 8/8/2025
