Compare commits


6 Commits

Author SHA1 Message Date
Jordan Diaz
72da3b7659 Support custom base_url in the Claude adapter (MiniMax Anthropic-compatible)
MiniMax exposes an Anthropic-API-compatible endpoint at
https://api.minimax.io/anthropic/v1. New AGENTIC_ANTHROPIC_BASE_URL
variable to use it.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-04 10:42:40 +00:00
Jordan Diaz
00c41fedb2 Support custom base_url in the OpenAI adapter (MiniMax, DeepInfra, etc.)
New AGENTIC_OPENAI_BASE_URL variable for providers compatible
with the OpenAI API (MiniMax, DeepInfra, Together, etc.).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-04 10:38:11 +00:00
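Taken together, the two commits above let the service be pointed at alternative providers through environment variables. A minimal sketch of the assumed configuration — the variable names come from the commit messages, the Anthropic-compatible URL from the first commit, and the OpenAI-compatible URL is a hypothetical placeholder:

```python
import os

# Variable names taken from the commit messages; set these before starting the service.
os.environ["AGENTIC_ANTHROPIC_BASE_URL"] = "https://api.minimax.io/anthropic/v1"
# Hypothetical OpenAI-compatible endpoint; substitute your provider's real URL.
os.environ["AGENTIC_OPENAI_BASE_URL"] = "https://api.example-provider.com/v1"
```

Leaving either variable unset (or empty) falls back to each SDK's default endpoint, per the adapter diffs below.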
Jordan Diaz
a86445f91a Fix history: mark it as past context, not as a new request
The model was repeating earlier tasks because the history was
rebuilt as user/assistant messages that looked like new requests.
The history now goes in as a context block explicitly marked
with [HISTORIAL — NO ejecutar de nuevo].

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-04 10:30:13 +00:00
Jordan Diaz
a9fbd01b5d Fix Claude adapter: convert OpenAI-format messages to Claude format
- role=tool → role=user with tool_result blocks
- assistant with tool_calls → assistant with tool_use blocks
- Merge consecutive messages with the same role (Claude requires alternation)
- Capture input_tokens from the message_start event

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-04 10:22:35 +00:00
Jordan Diaz
184486b62b Context debug: store the full system_prompt + messages of the last build
The /context-debug endpoint now returns full_context with the exact
system_prompt and messages sent to the model.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-04 09:19:16 +00:00
Jordan Diaz
bc6ad3bcec Auto-load the knowledge base on service startup
Loading logic extracted into a reusable _load_knowledge_from_dir().
It is called automatically in the lifespan after set_dependencies().
If it fails, it only logs a warning; startup is not blocked.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-04 09:10:49 +00:00
8 changed files with 205 additions and 57 deletions

View File

@@ -115,7 +115,7 @@ API server-side para operaciones de base de datos. Disponible en todos los hooks
### Read — `CmsApi::get()`
-## IMPORTANT: Table and field names can be extracted from the schemas in cms/data/schemas/<nombre_de_tabla>.ini.php
+## IMPORTANT: Table and field names can be extracted from the schemas in cms/data/schema/<nombre_de_tabla>.ini.php
```php
// All records
@@ -167,7 +167,7 @@ $datos = CmsApi::get("productos", "", "", "", [
### Insert — `CmsApi::insert()`
-## IMPORTANT: Table and field names can be extracted from the schemas in cms/data/schemas/<nombre_de_tabla>.ini.php
+## IMPORTANT: Table and field names can be extracted from the schemas in cms/data/schema/<nombre_de_tabla>.ini.php
```php
// One record
@@ -200,7 +200,7 @@ CmsApi::insert('productos',
### Update — `CmsApi::update()`
-## IMPORTANT: Table and field names can be extracted from the schemas in cms/data/schemas/<nombre_de_tabla>.ini.php
+## IMPORTANT: Table and field names can be extracted from the schemas in cms/data/schema/<nombre_de_tabla>.ini.php
```php
// With a string condition
@@ -227,7 +227,7 @@ CmsApi::update('productos', ["activo" => 0], "precio < 50");
### Delete — `CmsApi::delete()`
-## IMPORTANT: Table and field names can be extracted from the schemas in cms/data/schemas/<nombre_de_tabla>.ini.php
+## IMPORTANT: Table and field names can be extracted from the schemas in cms/data/schema/<nombre_de_tabla>.ini.php
```php
CmsApi::delete('productos', "num=5");

View File

@@ -17,10 +17,14 @@ logger = logging.getLogger(__name__)
class ClaudeAdapter(ModelAdapter):
"""Adapter for the Anthropic Claude API."""
-def __init__(self, api_key: str | None = None) -> None:
-self._client = anthropic.AsyncAnthropic(
-api_key=api_key or settings.anthropic_api_key,
-)
+def __init__(self, api_key: str | None = None, base_url: str | None = None) -> None:
+kwargs: dict[str, Any] = {
+"api_key": api_key or settings.anthropic_api_key,
+}
+url = base_url or settings.anthropic_base_url
+if url:
+kwargs["base_url"] = url
+self._client = anthropic.AsyncAnthropic(**kwargs)
# ------------------------------------------------------------------
# Streaming
@@ -38,7 +42,7 @@ class ClaudeAdapter(ModelAdapter):
temperature=settings.temperature,
)
-# Separate system message
+# Separate system message and convert OpenAI format to Claude format
system_content = ""
api_messages: list[dict[str, Any]] = []
for m in messages:
@@ -46,6 +50,7 @@ class ClaudeAdapter(ModelAdapter):
system_content = m["content"]
else:
api_messages.append(m)
api_messages = self._convert_messages(api_messages)
kwargs: dict[str, Any] = {
"model": config.model_id or settings.default_model_id,
@@ -62,8 +67,14 @@ class ClaudeAdapter(ModelAdapter):
current_tool_id = ""
current_tool_name = ""
accumulated_args = ""
input_tokens = 0
async for event in stream:
if event.type == "message_start" and hasattr(event, "message"):
usage = getattr(event.message, "usage", None)
if usage:
input_tokens = getattr(usage, "input_tokens", 0)
if event.type == "content_block_start":
block = event.content_block
if block.type == "tool_use":
@@ -103,12 +114,12 @@ class ClaudeAdapter(ModelAdapter):
continue
if event.type == "message_delta":
+output_tokens = getattr(event.usage, "output_tokens", 0) if event.usage else 0
yield StreamChunk(
finish_reason=event.delta.stop_reason or "",
usage={
-"output_tokens": getattr(
-event.usage, "output_tokens", 0
-)
+"input_tokens": input_tokens,
+"output_tokens": output_tokens,
},
)
@@ -135,6 +146,7 @@ class ClaudeAdapter(ModelAdapter):
system_content = m["content"]
else:
api_messages.append(m)
api_messages = self._convert_messages(api_messages)
kwargs: dict[str, Any] = {
"model": config.model_id or settings.default_model_id,
@@ -186,6 +198,103 @@ class ClaudeAdapter(ModelAdapter):
# Helpers
# ------------------------------------------------------------------
@staticmethod
def _convert_messages(messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
"""Convert OpenAI-format messages to Claude format.
- role=tool → role=user with tool_result content blocks
- assistant with tool_calls → assistant with tool_use content blocks
- Consecutive same-role messages get merged (Claude requires alternating)
"""
converted: list[dict[str, Any]] = []
for m in messages:
role = m.get("role", "")
if role == "tool":
# Convert to user message with tool_result block
block = {
"type": "tool_result",
"tool_use_id": m.get("tool_call_id", ""),
"content": m.get("content", ""),
}
if m.get("is_error"):
block["is_error"] = True
# Merge with previous user message if exists
if converted and converted[-1]["role"] == "user":
content = converted[-1]["content"]
if isinstance(content, str):
converted[-1]["content"] = [{"type": "text", "text": content}, block]
elif isinstance(content, list):
content.append(block)
else:
converted[-1]["content"] = [block]
else:
converted.append({"role": "user", "content": [block]})
elif role == "assistant" and "tool_calls" in m:
# Convert tool_calls to tool_use content blocks
blocks: list[dict[str, Any]] = []
text = m.get("content", "")
if text:
blocks.append({"type": "text", "text": text})
for tc in m["tool_calls"]:
func = tc.get("function", {})
args_str = func.get("arguments", "{}")
try:
args = json.loads(args_str) if isinstance(args_str, str) else args_str
except (json.JSONDecodeError, TypeError):
args = {}
blocks.append({
"type": "tool_use",
"id": tc.get("id", ""),
"name": func.get("name", ""),
"input": args,
})
# Merge with previous assistant if exists
if converted and converted[-1]["role"] == "assistant":
prev = converted[-1]["content"]
if isinstance(prev, str):
converted[-1]["content"] = [{"type": "text", "text": prev}] + blocks
elif isinstance(prev, list):
prev.extend(blocks)
else:
converted[-1]["content"] = blocks
else:
converted.append({"role": "assistant", "content": blocks})
elif role == "assistant":
content = m.get("content", "")
# Merge with previous assistant
if converted and converted[-1]["role"] == "assistant":
prev = converted[-1]["content"]
if isinstance(prev, str):
converted[-1]["content"] = prev + "\n" + content if content else prev
elif isinstance(prev, list) and content:
prev.append({"type": "text", "text": content})
else:
converted.append({"role": "assistant", "content": content})
elif role == "user":
content = m.get("content", "")
# Merge with previous user
if converted and converted[-1]["role"] == "user":
prev = converted[-1]["content"]
if isinstance(prev, str) and isinstance(content, str):
converted[-1]["content"] = prev + "\n" + content
elif isinstance(prev, list) and isinstance(content, str):
prev.append({"type": "text", "text": content})
elif isinstance(prev, str) and isinstance(content, list):
converted[-1]["content"] = [{"type": "text", "text": prev}] + content
elif isinstance(prev, list) and isinstance(content, list):
prev.extend(content)
else:
converted.append({"role": role, "content": content})
else:
converted.append(m)
return converted
@staticmethod
def _format_tools(tools: list[dict[str, Any]]) -> list[dict[str, Any]]:
"""Convert internal tool definitions to Anthropic tool format."""

View File

@@ -17,10 +17,14 @@ logger = logging.getLogger(__name__)
class OpenAIAdapter(ModelAdapter):
"""Adapter for the OpenAI API (GPT-4o, o1, etc.)."""
-def __init__(self, api_key: str | None = None) -> None:
-self._client = AsyncOpenAI(
-api_key=api_key or settings.openai_api_key,
-)
+def __init__(self, api_key: str | None = None, base_url: str | None = None) -> None:
+kwargs: dict[str, Any] = {
+"api_key": api_key or settings.openai_api_key,
+}
+url = base_url or settings.openai_base_url
+if url:
+kwargs["base_url"] = url
+self._client = AsyncOpenAI(**kwargs)
# ------------------------------------------------------------------
# Streaming

View File

@@ -309,11 +309,13 @@ async def get_context_debug(session_id: str) -> dict[str, Any]:
history = ctx_engine.get_debug_history(session_id)
last = ctx_engine.get_last_context_debug(session_id)
full_context = ctx_engine.get_last_full_context(session_id)
return {
"session_id": session_id,
"total_builds": len(history),
"last_build": last,
"full_context": full_context,
"history": history,
}
@@ -326,22 +328,18 @@ class LoadKnowledgeRequest(BaseModel):
docs_path: str = "docs"
-@router.post("/knowledge/load")
-async def load_knowledge(body: LoadKnowledgeRequest) -> dict[str, Any]:
-"""Load markdown docs from a directory into the knowledge base.
-Generates embeddings for semantic search via OpenAI text-embedding-3-small.
-"""
+async def _load_knowledge_from_dir(docs_path: str = "docs") -> dict[str, Any]:
+"""Load knowledge docs from directory. Used by endpoint and startup."""
memory = _deps.get("memory_store")
if not memory:
-raise HTTPException(status_code=501, detail="Memory store not available")
+return {"status": "error", "message": "Memory store not available"}
-docs_dir = pathlib.Path(body.docs_path)
+docs_dir = pathlib.Path(docs_path)
if not docs_dir.is_absolute():
-docs_dir = pathlib.Path(__file__).resolve().parent.parent.parent / body.docs_path
+docs_dir = pathlib.Path(__file__).resolve().parent.parent.parent / docs_path
if not docs_dir.is_dir():
-raise HTTPException(status_code=400, detail=f"Directory not found: {docs_dir}")
+return {"status": "error", "message": f"Directory not found: {docs_dir}"}
# Read all docs
docs_data: list[tuple[str, str, str, str, list[str]]] = [] # (id, title, content, summary, tags)
@@ -415,6 +413,18 @@ async def load_knowledge(body: LoadKnowledgeRequest) -> dict[str, Any]:
}
@router.post("/knowledge/load")
async def load_knowledge(body: LoadKnowledgeRequest) -> dict[str, Any]:
"""Load markdown docs from a directory into the knowledge base.
Generates embeddings for semantic search via OpenAI text-embedding-3-small.
"""
result = await _load_knowledge_from_dir(body.docs_path)
if result.get("status") == "error":
raise HTTPException(status_code=501, detail=result["message"])
return result
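The refactor above follows a common pattern: the reusable helper reports failure through a status dict instead of raising, and only the thin HTTP endpoint converts that into an HTTP error. A framework-free sketch of the same shape, with RuntimeError standing in for FastAPI's HTTPException:

```python
from typing import Any

def endpoint_from_helper(result: dict[str, Any]) -> dict[str, Any]:
    # The endpoint stays thin: it only maps a status-dict error
    # onto an exception. Startup code can call the helper directly
    # and treat the same error as a non-fatal warning instead.
    if result.get("status") == "error":
        raise RuntimeError(result["message"])  # stand-in for HTTPException
    return result

ok = endpoint_from_helper({"status": "ok", "count": 3})
```

This is why the startup auto-load (later in this diff) can reuse the helper without pulling HTTP error semantics into the lifespan.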
@router.get("/knowledge")
async def list_knowledge() -> dict[str, Any]:
"""List all documents in the knowledge base."""

View File

@@ -29,7 +29,9 @@ class Settings(BaseSettings):
# --- Model providers ---
anthropic_api_key: str = ""
anthropic_base_url: str = "" # Custom base URL (for MiniMax Anthropic-compatible, etc.)
openai_api_key: str = ""
openai_base_url: str = "" # Custom base URL (for MiniMax, DeepInfra, etc.)
default_model_provider: str = "claude"
default_model_id: str = "claude-sonnet-4-20250514"
max_tokens: int = 4096

View File

@@ -52,6 +52,8 @@ class ContextEngine:
# Debug history: last N context builds per session
self._history: dict[str, list[dict[str, Any]]] = defaultdict(list)
self._max_history = 20
# Full context of the LAST build per session (not accumulated)
self._last_full_context: dict[str, dict[str, Any]] = {}
# ------------------------------------------------------------------
# Public — build context for a model call
@@ -117,6 +119,14 @@ class ContextEngine:
total_token_estimate=total_tokens,
)
# Store the full context of the last build (only the latest per session)
self._last_full_context[session.session_id] = {
"system_prompt": system_prompt,
"messages": messages,
"total_tokens": total_tokens,
"timestamp": time.time(),
}
# --- Debug: log and store context build ---
section_summary = []
for s in sections:
@@ -169,6 +179,10 @@ class ContextEngine:
history = self._history.get(session_id, [])
return history[-1] if history else None
def get_last_full_context(self, session_id: str) -> dict[str, Any] | None:
"""Return the full context (system_prompt + messages) of the last build."""
return self._last_full_context.get(session_id)
def rehydrate_artifact(
self,
artifact: ArtifactSummary,
@@ -543,22 +557,36 @@ class ContextEngine:
messages: list[dict[str, Any]] = []
-# Include previous task exchanges as conversation history
-# (so the model remembers what was said in earlier turns)
-for entry in session.task_history[-10:]:
-summary = entry.get("summary", "")
-objective = entry.get("objective", "")
-if summary.startswith("User: "):
-# Direct response format: "User: X → Agent: Y"
-parts = summary.split(" → Agent: ", 1)
-user_msg = objective or parts[0].replace("User: ", "", 1)
-agent_msg = parts[1] if len(parts) > 1 else summary
-messages.append({"role": "user", "content": user_msg})
-messages.append({"role": "assistant", "content": agent_msg})
-elif objective:
-# Task with tools — include as compact exchange
-messages.append({"role": "user", "content": objective})
-messages.append({"role": "assistant", "content": summary[:500] if summary else "Tarea completada."})
+# Include previous task exchanges as compact conversation history
+if session.task_history:
+history_lines = ["[HISTORIAL DE CONVERSACIÓN ANTERIOR — NO ejecutar de nuevo, solo contexto]"]
+for entry in session.task_history[-10:]:
+objective = entry.get("objective", "")[:200]
+summary = entry.get("summary", "")
+key_data = entry.get("key_data", {})
+tools = entry.get("tools_used", [])
+history_lines.append(f"Usuario pidió: {objective}")
+if tools:
+history_lines.append(f" Tools usadas: {', '.join(tools[:5])}")
+if key_data:
+kd_parts = []
+for table, nums in key_data.get("tables", {}).items():
+kd_parts.append(f"{table}: records {nums}")
+if key_data.get("sections"):
+kd_parts.append(f"sections: {key_data['sections'][:5]}")
+if key_data.get("modules"):
+kd_parts.append(f"modules: {key_data['modules'][:5]}")
+if kd_parts:
+history_lines.append(f" Datos clave: {'; '.join(kd_parts)}")
+# Extract agent response from summary
+if " → Agent: " in summary:
+agent_part = summary.split(" → Agent: ", 1)[1][:200]
+history_lines.append(f" Resultado: {agent_part}")
+history_lines.append("")
+messages.append({"role": "user", "content": "\n".join(history_lines)})
+messages.append({"role": "assistant", "content": "Entendido, tengo el contexto del historial. ¿En qué puedo ayudarte ahora?"})
# Current user message
messages.append({"role": "user", "content": user_content})
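The history fix above collapses all prior turns into one explicitly labeled context message, followed by a neutral assistant acknowledgment, so that only the final user message reads as an actionable request. A minimal sketch of that idea, assuming the entry fields shown in the diff (objective; other fields omitted for brevity):

```python
from typing import Any

def build_history_block(entries: list[dict[str, Any]]) -> list[dict[str, Any]]:
    # Prior exchanges become a single labeled context message rather than
    # user/assistant turns that could be mistaken for fresh requests.
    lines = ["[HISTORIAL DE CONVERSACIÓN ANTERIOR — NO ejecutar de nuevo, solo contexto]"]
    for e in entries:
        lines.append(f"Usuario pidió: {e.get('objective', '')[:200]}")
    return [
        {"role": "user", "content": "\n".join(lines)},
        {"role": "assistant", "content": "Entendido, tengo el contexto del historial."},
    ]

msgs = build_history_block([{"objective": "crear producto"}])
```

The closing assistant message completes the user/assistant pair, which also keeps the alternation requirement of the Claude adapter satisfied.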

View File

@@ -95,6 +95,14 @@ async def lifespan(app: FastAPI):
mcp_registry=mcp_registry,
)
# 7. Auto-load knowledge base
from .api.routes import _load_knowledge_from_dir
try:
kb_result = await _load_knowledge_from_dir("docs")
logger.info("Knowledge auto-loaded: %d docs, embeddings=%s", kb_result.get("count", 0), kb_result.get("embeddings", False))
except Exception as e:
logger.warning("Failed to auto-load knowledge: %s", e)
logger.info("All systems initialized. Serving on %s:%d", settings.host, settings.port)
yield
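The try/except around the auto-load keeps startup non-blocking: a missing or broken docs directory degrades to a warning instead of preventing the service from serving. A self-contained sketch of the pattern (function names here are illustrative, not the project's):

```python
import asyncio
import logging

logger = logging.getLogger("startup")

async def autoload_knowledge(loader):
    # A failed knowledge load logs a warning but never blocks startup.
    try:
        return await loader()
    except Exception as e:
        logger.warning("Failed to auto-load knowledge: %s", e)
        return None

async def _broken_loader():
    raise RuntimeError("docs missing")

result = asyncio.run(autoload_knowledge(_broken_loader))
```

Pairing this with the status-dict helper means only truly unexpected errors reach the except branch; an absent directory is already reported as a normal result.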

View File

@@ -98,21 +98,6 @@ Rule of thumb:
See [docs/hooks-and-api.md](docs/hooks-and-api.md) for usage.
## Database Access
When the site is running in Docker, you can connect to the database:
- **Host:** `127.0.0.1`
- **Port:** Check `.docker/docker-compose.yml` for the mapped port (usually 3307+)
- **Credentials:** Read from `.docker/.env`:
- `DB_USERNAME`
- `DB_PASSWORD`
- `DB_DATABASE`
```bash
docker exec -it dw-<project-name>-db mysql -u root -p<password> <database>
```
**Important:** Table names in CmsApi/Twig do NOT use the `cms_` prefix. The primary key is always `num`, never `id`.
## Acai Core (web-base)
@@ -138,6 +123,8 @@ Do NOT modify web-base files — they are shared across all projects.
11. Twig concatenation uses `~` operator: `'value=' ~ variable`
12. `enlace` (link) fields already include slashes — **NEVER modify an existing enlace** unless explicitly asked
13. **NEVER modify the `controlador` field** of existing records — it defines whether a page is Builder or Standard
14. All CmsApi/Twig variables and field names should be extracted from the schemas in `cms/data/schema/<nombre_de_tabla>.ini.php` before use. Do not guess variable names or field types.
15. NEVER make up a field or table name. Always check the schema files in `cms/data/schema/` to confirm field names and types before using them.
## MCP Tools