Vibe Coding with AI: Building an Automatic Zabbix Service Map

Good morning everyone! Dimitri Bellini here, back on the Quadrata channel, your spot for diving into the open-source world and IT topics I love – and hopefully, you do too!

If you enjoy these explorations, please give this video a thumbs up and subscribe if you haven’t already. Today, we’re venturing into the exciting intersection of coding, Artificial Intelligence, and Zabbix. I wanted to tackle a real-world challenge I faced: using AI to enhance Zabbix monitoring, specifically for event correlation.

Enter “Vibe Coding”: A Different Approach

Lately, there’s been a lot of buzz around “Vibe Coding.” What is it exactly? Honestly, it feels like a rather abstract concept! The general idea is to write code, perhaps in any language, by taking a more fluid, iterative approach. Instead of meticulous planning, designing flowcharts, and defining every function upfront, you start by explaining your goal to an AI and then, through a process of trial, error, and refinement (“kicks and punches,” as I jokingly put it), you arrive at a solution.

It’s an alternative path, potentially useful for those less familiar with specific languages, although I believe some foundational knowledge is still crucial to avoid ending up with a complete mess. It’s the method I embraced for the project I’m sharing today.

What You’ll Need for Vibe Coding:

  • Time and Patience: It’s an iterative process, sometimes frustrating!
  • Money: Accessing powerful AI models often involves API costs.
  • Tools: I used VS Code along with an AI coding assistant plugin, Cline (Roo Code is a similar, actively evolving alternative). These plugins integrate the AI directly into the IDE.

The Zabbix Challenge: Understanding Service Dependencies

My initial goal was ambitious: leverage AI within Zabbix for better event correlation to pinpoint the root cause of problems faster. However, Zabbix presents some inherent challenges:

  • Standard Zabbix configurations (hosts, groups, tags) don’t automatically define the intricate dependencies *between* services running on different hosts.
  • Knowledge about these dependencies is often siloed within different teams in an organization, making manual mapping difficult and often incomplete.
  • Zabbix, by default, doesn’t auto-discover applications and their communication pathways across hosts.
  • Existing correlation methods (time-based, host groups, manually added tags) are often insufficient for complex scenarios.

Creating and maintaining a service map manually is incredibly time-consuming and struggles to keep up with dynamic environments. My objective became clear: find a way to automate service discovery and map the communications between them automatically.

My Goal: Smarter Event Correlation Through Auto-Discovery

Imagine a scenario with multiple Zabbix alerts. My ideal outcome was to automatically enrich these events with tags that reveal their relationships. For example, an alert on a CRM application could be automatically tagged as dependent on a specific database instance (DB Instance) because the system detected a database connection, or perhaps linked via an NFS share. This context is invaluable for root cause analysis.
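
The proof-of-concept doesn’t cover this part yet, but to make the idea concrete: in Zabbix, tags defined on a trigger propagate to the problems it generates, so one plausible way to push a discovered dependency back into Zabbix would be a trigger.update call along these lines (the trigger ID, tag names, URL and token below are made up purely for illustration):

```python
import requests

ZABBIX_URL = "https://zabbix.example.com/api_jsonrpc.php"  # placeholder
API_TOKEN = "YOUR_API_TOKEN"                               # placeholder

# Hypothetical example: tag the CRM trigger with the DB instance it depends on,
# so future problems raised by that trigger carry the dependency information.
payload = {
    "jsonrpc": "2.0",
    "method": "trigger.update",
    "params": {
        "triggerid": "13579",  # made-up trigger ID
        "tags": [
            {"tag": "depends_on", "value": "db-instance-01"},
            {"tag": "link_type", "value": "db-connection"},
        ],
    },
    "id": 1,
}
resp = requests.post(
    ZABBIX_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},  # Bearer auth on recent Zabbix versions
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```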

My Vibe Coding Journey: Tools and Process

To build this, I leaned heavily on VS Code and the Cline AI assistant plugin. The real power, though, came from the Large Language Model (LLM) behind it.

AI Model Choice: Claude 3 Sonnet

While local, open-source models like the Llama variants exist, I found they often lack the scale, or require prohibitive resources, for complex coding tasks. The most effective solution for me was using Claude 3 Sonnet via Anthropic’s API. It performed exceptionally well, especially with the “tool use” features the plugin depends on, where it seemed more effective than the other models I considered.

I accessed the API via OpenRouter, a handy service that acts as a broker for various AI models. This provides flexibility, allowing you to switch models without managing separate accounts and billing with each provider (like Anthropic, Google, OpenAI).
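
For reference, OpenRouter exposes an OpenAI-compatible chat completions endpoint, so a rough sketch of a call looks like the following (the API key is a placeholder and the model slug is only an example; check OpenRouter’s model list for current names):

```python
import requests

OPENROUTER_KEY = "YOUR_OPENROUTER_API_KEY"  # placeholder

def ask_model(prompt: str, model: str = "anthropic/claude-3-sonnet"):
    """Send a single-turn chat request through OpenRouter."""
    resp = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENROUTER_KEY}"},
        json={
            "model": model,  # switching providers is just a different slug
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```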

Lessons Learned: Checkpoints and Context Windows

  • Use Checkpoints! The plugin offers a “Checkpoint” feature. Vibe coding can lead you down wrong paths, and checkpoints let you revert your codebase. I learned the hard way that this feature relies on Git: I didn’t have it set up initially and had to redo significant portions of work. My advice: enable Git and use checkpoints!
  • Mind the Context Window: When interacting with the AI, the entire conversation history (the context) is crucial. If the context window of the model is too small, it “forgets” earlier parts of the code or requirements, leading to errors and inconsistencies. Claude 3 Sonnet has a reasonably large context window, which was essential for this project’s success.

The Result: A Dynamic Service Map Application

After about three hours of work and roughly $10-20 in API costs (it might have been more due to some restarts!), I had a working proof-of-concept application. Here’s what it does:

  1. Connects to Zabbix: It fetches the list of hosts monitored by my Zabbix server.
  2. Discovers Services & Connections: For selected hosts, it retrieves information about running services and their network connections.
  3. Visualizes Dependencies: It generates a dynamic, interactive map showing the hosts and the communication links between them.
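
As an illustration of step 1, fetching the monitored hosts is a single JSON-RPC call to the Zabbix API. Here is a minimal sketch (URL and token are placeholders; how the token is passed differs slightly between Zabbix versions):

```python
import requests

ZABBIX_URL = "https://zabbix.example.com/api_jsonrpc.php"  # placeholder
API_TOKEN = "YOUR_API_TOKEN"                               # placeholder

def get_hosts():
    """Return the hosts monitored by the Zabbix server."""
    payload = {
        "jsonrpc": "2.0",
        "method": "host.get",
        "params": {"output": ["hostid", "host", "name"]},
        "id": 1,
    }
    # Recent Zabbix versions accept the API token as a Bearer header;
    # older ones expect an "auth" field inside the JSON payload instead.
    resp = requests.post(
        ZABBIX_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["result"]

if __name__ == "__main__":
    for h in get_hosts():
        print(h["hostid"], h["name"])
```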

The “Magic Trick”: Using Netstat

How did I achieve the automatic discovery? The core mechanism is surprisingly simple, albeit a bit brute-force. I configured a Zabbix item on all relevant hosts to run the command:

netstat -ltunpa

This command provides a wealth of information about listening ports (services) and established network connections, including the programs associated with them. I added some Zabbix preprocessing steps (reshaping the raw netstat output towards a CSV-like form) to make the data easier for the application to parse.
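
To give an idea of what the application does with that raw output, here is a minimal parsing sketch (not the actual AI-generated code); it assumes the typical Linux net-tools column layout and simply splits the lines into listening services and established connections:

```python
def parse_netstat(output: str):
    """Split `netstat -ltunpa` output into listening services and connections."""
    listening, connections = [], []
    for line in output.splitlines():
        parts = line.split()
        # Typical columns: proto recv-q send-q local remote [state] pid/program
        if not parts or parts[0] not in ("tcp", "tcp6", "udp", "udp6"):
            continue
        proto, local, remote = parts[0], parts[3], parts[4]
        state = parts[5] if len(parts) > 5 else ""
        program = parts[-1] if "/" in parts[-1] else ""
        if state == "LISTEN":
            _, _, port = local.rpartition(":")
            listening.append({"proto": proto, "port": port, "program": program})
        elif state == "ESTABLISHED":
            rhost, _, rport = remote.rpartition(":")
            connections.append({"proto": proto, "local": local,
                                "remote_host": rhost, "remote_port": rport,
                                "program": program})
    return listening, connections
```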

Live Demo Insights

In the video, I demonstrated the application live. It correctly identified:

  • My Zabbix server host.
  • Another monitored host (Graph Host).
  • My own machine connecting via SSH to the Zabbix host (shown as an external IP since it’s not monitored by Zabbix).
  • Connections between the hosts, such as the Zabbix agent communication (port 10050) and web server access (port 80).
  • Clicking on hosts or connections reveals more details, like specific ports involved.

While the visual map is impressive (despite some minor graphical glitches that are typical of the rapid Vibe Coding process!), the truly valuable output is the underlying relationship data. This data is the key to achieving the original goal: enriching Zabbix events.

Next Steps and Your Thoughts?

This application is a proof-of-concept, demonstrating the feasibility of automatic service discovery using readily available data (like netstat output) collected by Zabbix. The “wow effect” of the map is nice, but the real potential lies in feeding this discovered dependency information back into Zabbix.

My next step, time permitting, is to tackle the event correlation phase – using these discovered relationships to automatically tag Zabbix problems, making root cause analysis much faster and more intuitive.

What do you think? I’d love to hear your thoughts, ideas, and suggestions in the comments below!

  • Have you tried Vibe Coding or similar AI-assisted development approaches?
  • Do you face similar challenges with service dependency mapping in Zabbix or other monitoring tools?
  • Are there specific use cases you’d like me to explore further?

Don’t forget to like this post if you found it interesting, share it with others who might benefit, and subscribe to the Quadrata YouTube channel for more content like this!

You can also join the discussion on the ZabbixItalia Telegram Channel.

Thanks for reading, have a great week, and I’ll see you in the next video!

Bye from Dimitri.

Automating My Video Workflow with N8N and AI: A Real-World Test

Good morning everyone, Dimitri Bellini here! Welcome back to Quadrata, my channel dedicated to the open-source world and the IT topics I find fascinating – and hopefully, you do too.

This week, I want to dive back into artificial intelligence, specifically focusing on a tool we’ve touched upon before: N8N. But instead of just playing around, I wanted to tackle a real problem I face every week: automating the content creation that follows my video production.

The Challenge: Bridging the Gap Between Video and Text

Making videos weekly for Quadrata is something I enjoy, but the work doesn’t stop when the recording ends. There’s the process of creating YouTube chapters, writing blog posts, crafting LinkedIn announcements, and more. These tasks, while important, can be time-consuming. My goal was to see if AI, combined with a powerful workflow tool, could genuinely simplify these daily (or weekly!) activities.

Could I automatically generate useful text content directly from my video’s subtitles? Let’s find out.

The Toolkit: My Automation Stack

To tackle this, I assembled a few key components:

  • N8N: An open-source workflow automation tool that uses a visual, node-based interface. It’s incredibly versatile and integrates with countless services. We’ll run this using Docker/Docker Compose.
  • AI Models: I experimented with two approaches:

    • Local AI with Ollama: Using Ollama to run models locally, specifically testing Gemma 3 (27B parameters). The Ollama release current at the time of recording added better support for models like Gemma.
    • Cloud AI with Google AI Studio: Leveraging the power of Google’s models via their free API tier, primarily focusing on Gemini 2.5 Pro due to its large context window and reasoning capabilities.

  • Video Transcripts: The raw material – the subtitles generated for my YouTube videos.

Putting it to the Test: Automating Video Tasks with N8N

I set up an N8N workflow designed to take my video transcript and process it through AI to generate different outputs. Here’s how it went:

1. Getting the Transcript

The first step was easy thanks to the N8N community. I used a community node called “YouTube Transcript” which, given a video URL, automatically fetches the subtitles. You can find and install community nodes easily via the N8N settings.

2. Generating YouTube Chapters

This was my first major test. I needed the AI to analyze the transcript and identify logical sections, outputting them in the standard YouTube chapter format (00:00:00 - Chapter Title).

  • Local Attempt (Ollama + Gemma 3): I configured an N8N “Basic LLM Chain” node to use my local Ollama instance running Gemma 3. I set the context length to 8000 tokens and the temperature very low (0.1) to prevent creativity and stick to the facts. The prompt was carefully crafted to explain the desired format, including examples.

    Result: Disappointing. While it generated *some* chapters, it stopped very early in the video (around 6 minutes into a 25+ minute video), missing the vast majority of the content. Despite the model’s theoretical capabilities, it failed this task with a transcript of that length on my hardware (RTX 8000 GPUs – decent cards, but perhaps not enough here, or a limitation of Ollama or the model itself).

  • Cloud Attempt (Google AI Studio + Gemini 2.5 Pro): I switched the LLM node to use the Google Gemini connection, specifically targeting Gemini 2.5 Pro with a temperature of 0.2.

    Result: Much better! Gemini 2.5 Pro processed the entire transcript and generated accurate, well-spaced chapters covering the full length of the video. Its larger context window and potentially more advanced reasoning capabilities handled the task effectively.

For chapter generation, the cloud-based Gemini 2.5 Pro was the clear winner in my tests.
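
To make the local attempt more concrete, here is a minimal sketch of the equivalent call against Ollama’s REST API, using the same settings I set in the LLM node (the model tag is an example and the prompt is abbreviated; the cloud attempt changed only the node configuration, not the prompt):

```python
import requests

CHAPTER_PROMPT = """You are given a video transcript with timestamps.
Produce YouTube chapters, one per line, in the format:
00:00:00 - Chapter Title
Cover the whole video, from start to finish.

Transcript:
{transcript}
"""

def generate_chapters(transcript: str) -> str:
    """Ask a local Ollama model for YouTube chapters, keeping creativity low."""
    resp = requests.post(
        "http://localhost:11434/api/generate",   # default Ollama endpoint
        json={
            "model": "gemma3:27b",                # example model tag
            "prompt": CHAPTER_PROMPT.format(transcript=transcript),
            "stream": False,
            "options": {"temperature": 0.1, "num_ctx": 8000},
        },
        timeout=600,
    )
    resp.raise_for_status()
    return resp.json()["response"]
```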

3. Crafting the Perfect LinkedIn Post

Next, I wanted to automate the announcement post for LinkedIn. Here, the prompt engineering became even more crucial. I didn’t just want a generic summary; I wanted it to sound like *me*.

  • Technique: I fed the AI (Gemini 2.5 Pro again, given the success with chapters) a detailed prompt that included:

    • The task description (create a LinkedIn post).
    • The video transcript as context.
    • Crucially: Examples of my previous LinkedIn posts. This helps the AI learn and mimic my writing style and tone.
    • Instructions on formatting and including relevant hashtags.
    • Using N8N variables to insert the specific video link dynamically.

  • Result: Excellent! The generated post was remarkably similar to my usual style, captured the video’s essence, included relevant tags, and was ready to be published (with minor review).
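
As a rough idea of how that prompt was put together (the wording below is a reconstruction, not my exact prompt), the essential ingredients are the task, the style examples, the video link, and the transcript:

```python
# Reconstruction of the prompt structure; all placeholder values are hypothetical.
LINKEDIN_PROMPT = """Write a LinkedIn post announcing my new YouTube video.

Match the tone and structure of these previous posts of mine:
{previous_posts}

Rules:
- Short, friendly, first person.
- End with relevant hashtags (e.g. #Zabbix #OpenSource #AI).
- Include this link to the video: {video_url}

Video transcript (use it only as source content, do not quote it verbatim):
{transcript}
"""

# In N8N the placeholders are filled with node expressions
# (e.g. {{ $json["transcript"] }}) rather than Python's str.format().
prompt = LINKEDIN_PROMPT.format(
    previous_posts="...paste two or three earlier posts here...",
    video_url="https://youtu.be/VIDEO_ID",   # placeholder
    transcript="...full transcript here...",
)
```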

4. Automating Blog Post Creation

The final piece was generating a draft blog post directly from the transcript.

  • Technique: Similar to the LinkedIn post, but with different requirements. The prompt instructed Gemini 2.5 Pro to:

    • Generate content in HTML format for easy pasting into my blog.
    • Avoid certain elements (like quotation marks unless necessary).
    • Recognize and correctly format specific terms (like “Quadrata”, my name “Dimitri Bellini”, or the “ZabbixItalia Telegram Channel” – https://t.me/zabbixitalia).
    • Structure the text logically with headings and paragraphs.
    • Include basic SEO considerations.

  • Result: Success again! While it took a little longer to generate (likely due to the complexity and length), the AI produced a well-structured HTML blog post draft based on the video content. It correctly identified and linked the channels mentioned and formatted the text as requested. This provides a fantastic starting point, saving significant time.

Key Takeaways and Challenges

This experiment highlighted several important points:

  • Prompt Engineering is King: The quality of the AI’s output is directly proportional to the quality and detail of your prompt. Providing examples, clear formatting instructions, and context is essential. Using AI itself (via web interfaces) to help refine prompts is a valid strategy!
  • Cloud vs. Local AI Trade-offs:

    • Cloud (Gemini 2.5 Pro): Generally more powerful, handled long contexts better in my tests, easier setup (API key). However, subject to API limits (even free tiers have them, especially for frequent/heavy use) and potential costs.
    • Local (Ollama/Gemma 3): Full control, no API limits/costs (beyond hardware/electricity). However, requires capable hardware (especially GPU RAM for large contexts/models), and smaller models might struggle with complex reasoning or very long inputs. Performance was insufficient for my chapter generation task in this test.

  • Model Capabilities Matter: Gemini 2.5 Pro’s large context window and reasoning seemed better suited for processing my lengthy video transcripts compared to the 27B parameter Gemma 3 model run locally (though further testing with different local models or configurations might yield different results).
  • Temperature Setting: Keeping the temperature low (e.g., 0.1-0.2) is vital for tasks requiring factual accuracy and adherence to instructions, minimizing AI “creativity” or hallucination.
  • N8N is Powerful: It provides the perfect framework to chain these steps together, handle variables, connect to different services (local or cloud), and parse outputs (like the Structured Output Parser node for forcing JSON).

Conclusion and Next Steps

Overall, I’m thrilled with the results! Using N8N combined with a capable AI like Google’s Gemini 2.5 Pro allowed me to successfully automate the generation of YouTube chapters, LinkedIn posts, and blog post drafts directly from my video transcripts. While the local AI approach didn’t quite meet my needs for this specific task *yet*, the cloud solution provided a significant time-saving and genuinely useful outcome.

The next logical step is to integrate the final publishing actions directly into N8N using its dedicated nodes for YouTube (updating descriptions with chapters) and LinkedIn (posting the generated content). This would make the process almost entirely hands-off after the initial video upload.

This is a real-world example of how AI can move beyond novelty and become a practical tool for automating tedious tasks. It’s not perfect, and requires setup and refinement, but the potential to streamline workflows is undeniable.

What do you think? Have you tried using N8N or similar tools for AI-powered automation? What are your favourite use cases? Let me know in the comments below! And if you found this interesting, give the video a thumbs up and consider subscribing to Quadrata for more content on open source and IT.

Thanks for reading, and see you next week!

Bye everyone,
Dimitri
