
Exploring Zabbix 7.4 Beta 1: What’s New and What I’m Hoping For

Good morning everyone! Dimitri Bellini here, back on the Quadrata channel – your spot for everything Open Source and the IT topics I find fascinating (and hopefully, you do too!). Thanks for tuning in each week. If you haven’t already, please consider subscribing and hitting that like button; it really helps the channel!

This week, we’re diving into something exciting: the latest Zabbix 7.4 Beta 1 release. This is a short-term support (STS) version, meaning it’s packed with new features that pave the way for the next Long-Term Support (LTS) release, expected later this year. With Beta 1 out and release candidates already tagged in the repositories, the official 7.4 release feels very close – likely within Q2 2025. So, let’s break down what’s new based on this first beta.

Key Features Introduced in Zabbix 7.4 Beta 1

While we don’t have that dream dashboard I keep showing (maybe one day, Zabbix team!), Beta 1 brings several practical and technical improvements.

Performance and Internals: History Cache Management

A significant technical improvement is the enhanced management of the history cache. Sometimes, items become disabled (manually or via discovery) but still occupy space in the cache, potentially causing issues. Zabbix 7.4 introduces:

  • Automatic Aging: Zabbix will now automatically clean up these inactive items from the cache.
  • Manual Aging Command: For environments with many frequently disabled objects, you can now manually trigger this cleanup at runtime using a command. This helps free up resources and maintain stability.
  • Cache Content Analysis: For troubleshooting, there are now better tools to analyze cache content and adjust log verbosity in real-time, which is invaluable in critical production environments.
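For context, here’s what the existing runtime tools look like in practice; the diagnostic and log-level commands below predate 7.4 and pair nicely with the new aging behavior. I haven’t verified the exact name of the new manual aging command in this beta, so I’m deliberately not guessing it here:

# Dump diagnostic information about the history cache
zabbix_server -R diaginfo=historycache

# Raise or lower log verbosity for the history syncer processes at runtime
zabbix_server -R log_level_increase="history syncer"
zabbix_server -R log_level_decrease="history syncer"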

UI and Widget Enhancements

  • Item History Widget Sorting: The Item History widget (introduced in 7.0) gets a much-needed update. When displaying logs, you can now choose to show the newest entries first, making log analysis much more intuitive than the old default (oldest first).
  • Test Item Value Copy Button: A small but incredibly useful UI tweak! When testing items, especially those returning large JSON payloads, you no longer need to manually select the text. There’s now a dedicated copy icon. Simple, but effective!
  • User Notification Management: Finally! Users can now manage their own notification media types (like email addresses) directly from their user settings via a dedicated menu. Previously, this required administrator intervention.

New Monitoring Capabilities

  • New ICMP Ping Retry Option (`icmpping`): ICMP check items now include a crucial `retry` option (see the sketch after this list). This helps reduce noise and potential engine load caused by transient network issues. Instead of immediately flagging an object as unreachable/reachable, and potentially triggering unnecessary actions or internal retries, you can configure the check to try, say, 3 times before marking the item state as unreachable. This should lead to more stable availability monitoring.
  • New Timestamp Functions & Macros: We have new functions (like `item.history.first_clock`) that return timestamps of the oldest/newest values within an evaluation period. While the exact use case isn’t immediately obvious to me (perhaps related to upcoming event correlation or specific Windows monitoring scenarios?), having more tools for time-based analysis is interesting. Additionally, new built-in timestamp macros are available for use in notifications.
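As a reference for the ICMP bullet above, here’s the long-standing `icmpping` key signature and a plain example; where exactly the new `retry` parameter slots into this signature is an assumption until the final 7.4 item key documentation lands:

# Pre-7.4 signature (the new retry option’s placement isn’t confirmed here):
#   icmpping[<target>,<packets>,<interval>,<size>,<timeout>]
# Example: ping 192.168.1.1 with 3 packets per check
icmpping[192.168.1.1,3]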

Major Map Enhancements

Maps receive some fantastic updates in 7.4, making them much more powerful and visually appealing:

  • Item Value Link Indicators: This is huge! Previously, link status (color/style) could only be tied to triggers. Now, you can base link appearance on:

    • Static: Just a simple visual link.
    • Trigger Status: The classic method.
    • Item Value: Define thresholds for numeric item values (e.g., bandwidth usage) or specific strings for text items (e.g., “on”/“off”) to change the link’s color and line style. This opens up possibilities for visualizing performance directly on the map without relying solely on triggers.

  • Auto-Hiding Labels: Tired of cluttered maps with overlapping labels? You can now set labels to hide by default and only appear when you hover over the element. This drastically improves readability for complex maps.
  • Scalable Background Images: Map background images will now scale proportionally to fit the map widget size, preventing awkward cropping or stretching.

One thing I’d still love to see, maybe before the final 7.4 release, is the ability to have multiple links between two map objects (e.g., representing aggregated network trunks).

New Templates and Integrations

Zabbix continues to expand its out-of-the-box monitoring:

  • Pure Storage FlashArray Template: Monitoring for this popular enterprise storage solution is now included.
  • Microsoft SQL for Azure Template: Enhanced cloud monitoring capabilities.
  • MySQL/Oracle Agent 2 Improvements: Simplifications for running custom queries directly via the Zabbix Agent 2 plugins.

What I’m Hoping For (Maybe 7.4, Maybe Later?)

Looking at the roadmap and based on some code movements I’ve seen, here are a couple of features I’m particularly excited about and hope to see soon, possibly even in 7.4:

  • Nested Low-Level Discovery (LLD): This would be a game-changer for dynamic environments. Imagine discovering databases, and then, as a sub-task, discovering the tables within each database using discovery prototypes derived from the parent discovery. This structured approach would simplify complex auto-discovery scenarios (databases, Kubernetes, cloud resources). I have a strong feeling this might make it into 7.4.
  • Event Correlation: My big dream feature! The ability to intelligently link related events, identifying the root cause (e.g., a failed switch) and suppressing the symptoms (all the hosts behind it becoming unreachable). This would significantly reduce alert noise and help focus on the real problem. It’s listed on the roadmap, but whether it lands in 7.4 remains to be seen.
  • Alternative Backend Storage: Also on the roadmap is exploring alternative backend solutions beyond traditional SQL databases (like potentially TimescaleDB alternatives, though not explicitly named). This is crucial groundwork for Zabbix 8.0 and beyond, especially for handling the massive data volumes associated with full observability (metrics, logs, traces).
  • New Host Wizard: A guided wizard for adding new hosts is also in development, which should improve the user experience.

Wrapping Up

Zabbix 7.4 is shaping up to be another solid release, bringing valuable improvements to maps, performance, usability, and monitoring capabilities. The map enhancements based on item values and the history cache improvements are particularly noteworthy from this Beta 1.

I’ll definitely keep you updated as we get closer to the final release and if features like Nested LLD or Event Correlation make the cut!

What do you think? Are these features useful for you? What are you hoping to see in Zabbix 7.4 or the upcoming Zabbix 8.0? Let me know in the comments below – I’m always curious to hear your thoughts and often pass feedback along (yes, I’m known for being persistent with the Zabbix team, Alexei Vladishev included!).

Don’t forget to check out the Quadrata YouTube channel for more content like this.

And if you’re not already there, join the conversation in the ZabbixItalia Telegram Channel – it’s a great place to connect with other Italian Zabbix users.

That’s all for today. Thanks for reading, and I’ll catch you in the next one!

– Dimitri Bellini

Visualizing Your Infrastructure: A Deep Dive into Zabbix Maps

Good morning everyone, Dimitri Bellini here, and welcome back to Quadrata – your go-to channel for open source and IT solutions! Today, I want to dive into a feature of our good friend Zabbix that we haven’t explored much yet: Zabbix Maps.

Honestly, I was recently working on some maps, and while it might not always be the most glamorous part of Zabbix, it sparked an idea: why not share what these maps are truly capable of, and perhaps more importantly, what they aren’t?

What Zabbix Maps REALLY Are: Your Digital Synoptic Panel

Think of Zabbix Maps as the modern, digital equivalent of those old-school synoptic panels with blinking lights. They provide a powerful graphical way to represent your infrastructure and its status directly within Zabbix. Here’s what you can achieve:

  • Real-time Host Status: Instantly see the overall health of your hosts based on whether they have active problems.
  • Real-time Event Representation: Visualize specific problems (triggers) directly on the map. Imagine a specific light turning red only when a critical service fails.
  • Real-time Item Metrics: Display actual data values (like temperature, traffic throughput, user counts) directly on your map, making data much more intuitive and visually appealing.

The core idea is to create a custom graphical overview tailored to your specific infrastructure, giving you an immediate understanding of what’s happening at a glance.

Clearing Up Misconceptions: What Zabbix Maps Are NOT

It’s crucial to understand the limitations to use maps effectively. Often, people hope Zabbix Maps will automatically generate network topology diagrams.

  • They are NOT Automatic Network Topology Maps: While you *could* manually build something resembling a network diagram, Zabbix doesn’t automatically discover devices and map their connections (who’s plugged into which switch port, etc.). Tools that attempt this often rely on protocols like Cisco’s CDP or the standard LLDP (both usually SNMP-based), which aren’t universally available across all devices. Furthermore, in large environments (think thousands of hosts and hundreds of switches), automatically generated topology maps quickly become an unreadable mess of tiny icons and overlapping lines. They might look cool initially but offer little practical value day-to-day.
  • They are NOT Application Performance Monitoring (APM) Relationship Maps (Yet!): Zabbix Maps don’t currently visualize the intricate relationships and data flows between different application components in the way dedicated APM tools do. While Zabbix is heading towards APM capabilities, the current map function isn’t designed for that specific purpose.

For the nitty-gritty details, I always recommend checking the official Zabbix documentation – it’s an invaluable resource.

Building Blocks of a Zabbix Map

When constructing your map, you have several element types at your disposal:

  • Host: Represents a monitored device. Its appearance can change based on problem severity.
  • Trigger: Represents a specific problem condition. You can link an icon’s appearance directly to a trigger’s state.
  • Map: Allows you to create nested maps. The icon for a sub-map can reflect the most severe status of the elements within it – great for drilling down!
  • Image: Use custom background images or icons to make your map visually informative and appealing.
  • Host Group: Automatically display all hosts belonging to a specific group within a defined area on the map.
  • Shape: Geometric shapes (rectangles, ellipses) that can be used for layout, grouping, or, importantly, displaying text and real-time data.
  • Link: Lines connecting elements. These can change color or style based on a trigger’s status, often used to represent connectivity or dependencies.

Zabbix also provides visual cues like highlighting elements with problems or showing small triangles to indicate a recent status change, helping you focus on what needs attention.

Bringing Maps to Life with Real-Time Data

One of the most powerful features is embedding live data directly onto your map. Instead of just seeing if a server is “up” or “down,” you can see its current CPU load, network traffic, or application-specific metrics.

This is typically done using Shapes and a specific syntax within the shape’s label. In Zabbix 6.x and later, the syntax looks something like this:

{?last(/Your Host Name/your.item.key)}

This tells Zabbix to display the last received value for the item your.item.key on the host named Your Host Name. You can add descriptive text around it, like:

CPU Load: {?last(/MyWebServer/system.cpu.load[,avg1])}

Zabbix is smart enough to often apply the correct unit (like Bps, %, °C) automatically if it’s defined in the item configuration.

Let’s Build a Simple Map (Quick Guide)

Here’s a condensed walkthrough based on what I demonstrated in the video (using Zabbix 6.4):

  1. Navigate to Maps: Go to Monitoring -> Maps.
  2. Create New Map: Click “Create map”. Give it a name (e.g., “YouTube Test”), set dimensions, and optionally choose a background image.

    • Tip: You can upload custom icons and background images under Administration -> General -> Images. I uploaded custom red/green icons and a background for the demo.

  3. Configure Map Properties: Decide on options like “Icon highlighting” (the colored border around problematic hosts) and “Mark elements on trigger status change” (the triangles for recent changes). You can also filter problems by severity or hide labels if needed. Click “Add”.
  4. Enter Constructor Mode: Open your newly created map and click “Constructor”.
  5. Add a Trigger-Based Icon:

    • Click “Add element” (defaults to a server icon).
    • Click the new element. Change “Type” to “Trigger”.
    • Under “Icons”, select your custom “green” icon for the “Default” state and your “red” icon for the “Problem” state.
    • Click “Add” next to “Triggers” and select the specific trigger you want this icon to react to.
    • Click “Apply”. Position the icon on your map.

  6. Add Real-Time Data Display:

    • Click “Add element” and select “Shape” (e.g., Rectangle).
    • Click the new shape. In the “Label” field, enter your data syntax, e.g., Temp: {?last(/quadrata-test-host/test.item)} (replace with your actual host and item key).
    • Customize font size, remove the border (set Border width to 0), etc.
    • Click “Apply”. Position the shape.
    • Important: In the constructor toolbar, toggle “Expand macros” ON to see the live data instead of the syntax string.

  7. Refine and Save: Adjust element positions (you might want to turn off “Snap to grid” for finer control). Remove default labels if they clutter the view (Map Properties -> Map element label type -> Nothing). Click “Update” to save your changes.

Testing with `zabbix_sender`

A fantastic tool for testing maps (especially with trapper items) is the zabbix_sender command-line utility. It lets you manually push data to Zabbix items.

Install the `zabbix-sender` package if you don’t have it. The basic syntax is:

zabbix_sender -z <zabbix_server> -s <host_name> -k <item_key> -o <value>

For example:

zabbix_sender -z 192.168.1.100 -s quadrata-test-host -k test.item -o 25

Sending a value that crosses a trigger threshold will change your trigger-linked icon on the map. Sending a different value will update the real-time data display.
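For example, assuming the demo trigger fires when the last value exceeds 30 (that threshold is just an assumption for illustration):

# Cross the threshold: the trigger fires and the linked icon turns red
zabbix_sender -z 192.168.1.100 -s quadrata-test-host -k test.item -o 35

# Drop back under it: the trigger resolves and the real-time label shows 25
zabbix_sender -z 192.168.1.100 -s quadrata-test-host -k test.item -o 25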

Wrapping Up

So, there you have it – a look into Zabbix Maps. They aren’t magic topology generators, but they are incredibly flexible and powerful tools for creating meaningful, real-time visual dashboards of your infrastructure’s health and performance. By combining different elements, custom icons, backgrounds, and live data, you can build truly informative synoptic views.

Don’t be afraid to experiment! Start simple and gradually add complexity as you get comfortable.

What are your thoughts on Zabbix Maps? Have you created any cool visualizations? Share your experiences or ask questions in the comments below!

If you found this helpful, please give the video a thumbs up, share it, and subscribe to Quadrata for more content on Zabbix and open source solutions.

Also, feel free to join the conversation in the Zabbix Italia Telegram channel – it’s a great community!

Thanks for reading, and I’ll see you in the next post!

– Dimitri Bellini

Unlock Your Documents’ Potential with Ragflow: An Open-Source RAG Powerhouse

Good morning, everyone! Dimitri Bellini here, back on the Quadrata channel – your spot for diving into the exciting world of open source and IT tech. Today, we’re tackling a topic some of you have asked about: advanced solutions for interacting with your own documents using AI.

I sometimes wait to showcase software until it’s a bit more polished, and today, I’m excited to introduce a particularly interesting one: Ragflow.

What is RAG and Why Should You Care?

We’re diving back into the world of RAG solutions – Retrieval-Augmented Generation. It sounds complex, but the core idea is simple and incredibly useful: using your *own* documents (manuals, reports, notes, anything on your disk) as a private knowledge base for an AI.

Instead of relying solely on the general knowledge (and potential inaccuracies) of large language models (LLMs), RAG lets you get highly relevant, context-specific answers based on *your* information. This is a practical, powerful use case for AI, moving beyond generic queries to solve specific problems using local data.

Introducing Ragflow: A Powerful Open-Source RAG Solution

Ragflow (find it on GitHub!) stands out from other RAG tools I’ve explored. It’s not just a basic framework; it’s shaping up to be a comprehensive, business-oriented platform. Here’s why it caught my eye:

  • Open Source: Freely available and community-driven.
  • Complete Solution: Offers a wide range of features out-of-the-box.
  • Collaboration Ready: Designed for teams to work on shared knowledge bases.
  • Easy Installation: Uses Docker Compose for a smooth setup.
  • Local First: Integrates seamlessly with local LLM providers like Ollama (which I use).
  • Rapid Development: The team is actively adding features and improvements.
  • Advanced Techniques: Incorporates methods like Self-RAG and Raptor for better accuracy.
  • API Access: Allows integration with other applications.

Diving Deeper: How Ragflow Enhances RAG

Ragflow isn’t just about basic document splitting and embedding. It employs sophisticated techniques:

  • Intelligent Document Analysis: It doesn’t just grab text. Ragflow performs OCR and analyzes document structure (understanding tables in Excel, layouts in presentations, etc.) based on predefined templates. This leads to much better comprehension and more accurate answers.
  • Self-RAG: A framework designed to improve the quality and factuality of the LLM’s responses, reducing the chances of the AI “inventing” answers (hallucinations) when it doesn’t know something.
  • Raptor: This technique focuses on the document processing phase. For long, complex documents, Raptor builds a hierarchical summary or tree of concepts *before* chunking and embedding. This helps the AI maintain context and understand the overall topic better.

These aren’t trivial features; they represent significant steps towards making RAG systems more reliable and useful.

Getting Started: Installing Ragflow (Step-by-Step)

Installation is straightforward thanks to Docker Compose. Here’s how I got it running:

  1. Clone the Repository (Important Tip!): Use the `--branch` flag to specify a stable release version. This saved me some trouble during testing. Replace `release-branch-name` with the desired version (e.g., `0.7.0`).

    git clone --branch release-branch-name https://github.com/infiniflow/ragflow.git

  2. Navigate to the Docker Directory:

    cd ragflow/docker

  3. Make the Entrypoint Script Executable:

    chmod +x entrypoint.sh

  4. Start the Services: This will pull the necessary images (including Ragflow, MySQL, Redis, MinIO, Elasticsearch) and start the containers.

    docker-compose up -d

    Note: Be patient! The Docker images, especially the main Ragflow one, can be quite large (around 9GB in my tests), so ensure you have enough disk space.

Once everything is up, you can access the web interface (usually at `http://localhost:80` or check the Docker Compose file/logs for the exact port).
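If you’re not sure which port ended up published, Docker can tell you directly; the service name below comes from Ragflow’s compose file, so adjust it if yours differs:

# List the stack’s containers and their published ports
docker-compose ps

# Follow the logs while everything initializes (the service name is an assumption)
docker-compose logs -f ragflow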

A Look Inside: Configuring and Using Ragflow

The web interface is clean and divided into key sections: Knowledge Base, Chat, File Manager, and Settings.

Setting Up Your AI Models (Ollama Example)

First, you need to tell Ragflow which AI models to use. Go to your profile settings -> Model Providers.

  • Click “Add Model”.
  • Select “Ollama”.
  • Choose the model type: “Chat” (for generating responses) or “Embedding” (for analyzing documents). You’ll likely need one of each.
  • Enter the **exact** model name as it appears in your Ollama list (e.g., `mistral:latest`, `nomic-embed-text:latest`).
  • Provide the Base URL for your Ollama instance (e.g., `http://your-ollama-ip:11434`).
  • Save the model. Repeat for your embedding model if it’s different. I used `nomic-embed-text` for embeddings and `weasel-lm-7b-v1-q5_k_m` (a fine-tuned model) for chat in my tests.
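To get the exact model names, you can ask Ollama directly; its API exposes an endpoint that lists every locally pulled model (swap in your own host):

# List locally available Ollama models: the names must match what you enter in Ragflow
curl http://your-ollama-ip:11434/api/tags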

Creating and Populating a Knowledge Base (Crucial Settings)

This is where your documents live.

  • Create a new Knowledge Base and give it a name.
  • Before Uploading: Go into the KB settings. This is critical! Define:

    • Language: The primary language of your documents.
    • Chunking Method: How documents are split. Ragflow offers templates like “General”, “Presentation”, “Manual”, “Q&A”, “Excel”, “Resume”. Choose the one that best fits your content. I used “Presentation” for my Zabbix slides.
    • Embedding Model: Select the Ollama embedding model you configured earlier.
    • Raptor: Enable this for potentially better context handling on complex docs.

  • Upload Documents: Now you can upload files or entire directories.
  • Parse Documents: Click the “Parse” button next to each uploaded document. Ragflow will process it using the settings you defined (OCR, chunking, embedding, Raptor analysis). You can monitor the progress.

Building Your Chat Assistant

This connects your chat model to your knowledge base.

  • Create a new Assistant.
  • Give it a name and optionally an avatar.
  • Important: Set an “Empty Response” message (e.g., “I couldn’t find information on that in the provided documents.”). This prevents the AI from making things up.
  • Add a welcome message.
  • Enable “Show Citation”.
  • Link Knowledge Base: Select the KB you created.
  • Prompt Engine: Review the system prompt. The default is usually quite good, instructing the AI to answer based *only* on the documents.
  • Model Setting: Select the Ollama chat model you configured. Choose a “Work Mode” like “Precise” to encourage focused answers.
  • (Optional) Re-ranking Model: I skipped this in version 0.7 due to some issues, but it’s a feature to watch.
  • Confirm and save.

Putting Ragflow to the Test (Zabbix Example)

I loaded my Zabbix presentation slides and asked the assistant some questions:

  • Explaining Zabbix log file fields.
  • Identifying programming languages used in Zabbix components.
  • Differentiating between Zabbix Agent Passive and Active modes.
  • Describing the Zabbix data collection flow.

The results were genuinely impressive! Ragflow provided accurate, detailed answers, citing the specific slides it drew information from. There was only one minor point where I wasn’t entirely sure if the answer was fully grounded in the text or slightly inferred, but overall, the accuracy and relevance were excellent, especially considering it was analyzing presentation slides.

Integrating Ragflow with Other Tools via API

A standout feature is the built-in API. For each assistant you create, you can generate an API key. This allows external applications to query that specific assistant and its associated knowledge base programmatically – fantastic for building custom integrations.
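As a rough sketch of what calling an assistant could look like (the endpoint path and payload shape here are assumptions, not Ragflow’s documented API; check the API reference that ships with your version):

# Hypothetical chat call against a Ragflow assistant; path and fields are assumed
curl -X POST "http://localhost/v1/api/completion" \
  -H "Authorization: Bearer $RAGFLOW_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"conversation_id": "<id>", "messages": [{"role": "user", "content": "How does Zabbix agent active mode work?"}]}'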

Final Thoughts and Why Ragflow Stands Out

Ragflow is a compelling RAG solution. Its focus on accurate document analysis, integration of advanced techniques like Self-RAG and Raptor, ease of use via Docker and Ollama, and the inclusion of collaboration and API features make it feel like a mature, well-thought-out product, despite being relatively new.

While it’s still evolving (as seen with the re-ranking feature I encountered), it’s already incredibly capable and provides a robust platform for anyone serious about leveraging their own documents with AI.

What do you think? Have you tried Ragflow or other RAG solutions? What are your favourite use cases for chatting with your own documents?

Let me know in the comments below! I’m always keen to hear your experiences and suggestions for tools to explore.

Don’t forget to give this video a thumbs up if you found it helpful, and subscribe to the Quadrata channel for more open-source tech deep dives.

Also, if you’re interested in Zabbix, join our friendly community on Telegram: Zabbix Italia.

Thanks for watching, and see you next week!

– Dimitri Bellini

Automate Smarter, Not Harder: Exploring N8n for AI-Powered Workflows

Good morning everyone! Dimitri Bellini here, back on Quadrata, my channel where we dive into the fascinating world of open source and IT. As I always say, I hope you find these topics as exciting as I do!

This week, we’re venturing back into the realm of artificial intelligence, but with a twist. We’ll be looking at an incredibly interesting, user-friendly, and – you guessed it – open-source tool called N8n (pronounced “N-eight-N”). While we’ve explored similar solutions before, N8n stands out with its vibrant community and powerful capabilities, especially its recent AI enhancements.

What is N8n and Why Should You Care?

At its core, N8n is a Workflow Automation Tool. It wasn’t born solely for AI; its primary goal is to help you automate sequences of tasks, connecting different applications and services together. Think of it as a visual way to build bridges between the tools you use every day.

Why opt for a tool like N8n instead of just writing scripts in Python or another language? The key advantage lies in maintainability and clarity. While scripts work, revisiting them months later often requires deciphering complex code. N8n uses a graphical user interface (GUI) with logical blocks. This visual approach makes workflows much easier to understand, debug, and modify, even long after you’ve created them. For me, especially for complex or evolving processes, this visual clarity is a huge plus.

The best part? You can install it right on your own hardware or servers, keeping your data and processes in-house.

Key Functionalities of N8n

N8n packs a punch when it comes to features:

  • Visual Workflow Builder: Create complex automation sequences graphically using a web-based GUI. Drag, drop, and connect nodes to define your logic.
  • Extensive Integrations: It boasts a vast library of pre-built integrations for countless applications and services (think Google Suite, Microsoft tools, databases, communication platforms, and much more).
  • Customizable Nodes: If a pre-built integration doesn’t exist, you can create custom nodes, for example, to execute your own Python code within a workflow.
  • AI Agent Integration: This is where it gets really exciting for us! N8n now includes dedicated modules (built using Langchain) to seamlessly integrate AI models, including self-hosted ones like those managed by Ollama.
  • Data Manipulation: N8n isn’t just about triggering actions. It allows you to transform, filter, merge, split, and enrich data as it flows through your workflow, enabling sophisticated data processing.
  • Strong Community & Templates: Starting from scratch can be daunting. N8n has a fantastic community that shares workflow templates. These are invaluable for learning and getting started quickly.

Getting Started: Installation with Docker

My preferred method for running N8n, especially for testing and home use, is using Docker and Docker Compose. It’s clean, contained, and easy to manage. While you *can* install it using npm, Docker keeps things tidy. (For a quick test drive, there’s also a single-container sketch right after the steps below.)

  1. Use Docker Compose: I started with the official Docker Compose setup provided on the N8n GitHub repository. This typically includes N8n itself and a Postgres database for backend storage (though SQLite is built-in for simpler setups).
  2. Configure Environment: Modify the .env file to set up database credentials and any other necessary parameters.
  3. Launch: Run docker-compose up -d to start the containers.
  4. Access: You should then be able to access the N8n web interface, usually at http://localhost:5678. You’ll need to create an initial user account.
  5. Connect AI (Optional but Recommended): Have your Ollama instance running if you plan to use local Large Language Models (LLMs).
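For that quick test drive without the full compose stack, the official image runs fine as a single container; a minimal sketch (the compose route from step 1 remains the better choice for persistent setups with Postgres):

# Persist N8n’s data between restarts
docker volume create n8n_data

# Run the official image and expose the web UI on port 5678
docker run -d --name n8n -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n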

N8n in Action: Some Examples

Let’s look at a few examples I demonstrated in the video to give you a feel for how N8n works:

Example 1: The AI Calculator

This was a simple workflow designed to show the basic AI Agent block.

  • It takes a mathematical question (e.g., “4 plus 5”).
  • Uses the AI Agent node configured with an Ollama model (like Mistral) and Postgres for memory (to remember conversation context).
  • The “tool” in this case is a simple calculator function.
  • The AI understands the request, uses the tool to get the result (9), and then formulates a natural language response (“The answer is 9”).
  • The execution log is fantastic here, showing step-by-step how the input flows through chat memory, the LLM, the tool, and back to the LLM for the final output.

Example 2: AI Web Agent with SERP API

This workflow demonstrated fetching external data and using AI to process it:

  • It used the SERP API tool (requiring an API key) to perform a web search (e.g., “latest news about Zabbix”).
  • The search results were passed to the first AI Agent (using Ollama) for initial processing/summarization.
  • Crucially, I showed how to pass the output of one node as input to the next using N8n’s expression syntax ({{ $json.output }} or similar).
  • A second AI Agent node was added with a specific prompt: “You are a very good AI agent specialized in blog writing.” This agent took the summarized web content and structured it into a blog post format.

Example 3: Simple Web Scraper

This showed basic web scraping without external APIs:

  • Used the built-in HTTP Request node to fetch content from specific web pages.
  • Applied filtering and data manipulation nodes to limit the number of pages and extract relevant text content (cleaning HTML).
  • Passed the cleaned text to Ollama for summarization.
  • The visual execution flow clearly showed each step turning green as it completed successfully.

I also briefly mentioned a much more complex potential workflow involving document processing (PDFs, text files), using Qdrant as a vector database, and Mistral for creating embeddings to build a Retrieval-Augmented Generation (RAG) system – showcasing the scalability of N8n.

Conclusion: Your Automation Powerhouse

N8n is a remarkably powerful and flexible tool for anyone looking to automate tasks, whether simple or complex. Its visual approach makes automation accessible, while its deep integration capabilities, including first-class support for AI models via tools like Ollama, open up a world of possibilities.

Being open-source and self-hostable gives you complete control over your workflows and data. Whether you’re automating IT processes, integrating marketing tools, processing data, or experimenting with AI, N8n provides a robust platform to build upon.

What do you think? Have you tried N8n or other workflow automation tools? What kind of tasks would you love to automate using AI?

Let me know your thoughts, suggestions, and experiences in the comments below! Your feedback is incredibly valuable.

If you found this useful, please consider sharing it and subscribing to my YouTube channel, Quadrata, for more content on open source and IT.

Thanks for reading, and see you in the next one!

– Dimitri Bellini

Automate Your Zabbix Reporting with Scheduled Reports: A Step-by-Step Guide

Hey everyone, Dimitri Bellini here from Quadrata, your go-to channel for open source and IT insights! It’s fantastic to have you back with me. If you’re enjoying the content and haven’t subscribed yet, now’s a great time to hit that button and help me bring you even more valuable videos. 😉

Today, we’re diving deep into a Zabbix feature that’s been around for a while but is now truly shining – Scheduled Reports. Recently, I’ve been getting a lot of questions about this from clients, and it made me realize it’s time to shed light on this often-overlooked functionality. So, let’s talk about automating those PDF reports from your Zabbix dashboards.

Why Scheduled Reports? The Power of Automated Insights

Scheduled reports might not be brand new to Zabbix (they’ve been around since version 5.4!), but honestly, I wasn’t completely sold on them until recently. In older versions, they felt a bit… incomplete. But with Zabbix 7 and especially 7.2, things have changed dramatically. Now, in my opinion, scheduled reports are becoming a genuinely useful tool.

What are we talking about exactly? Essentially, scheduled reports are a way to automatically generate PDFs of your Zabbix dashboards and have them emailed to stakeholders – think bosses, team leads, or anyone who needs a regular overview without logging into Zabbix directly. We all know that stakeholder, right? The one who wants to see a “green is good” PDF report every Monday morning (or Friday afternoon!). While dashboards are great for real-time monitoring, scheduled reports offer that convenient, digestible summary for those who need a quick status update.

Sure, everyone *could* log into Zabbix and check the dashboards themselves. But let’s be real, sometimes pushing the information directly to them in a clean, professional PDF format is just more efficient and impactful. And that’s where Zabbix Scheduled Reports come in!

Key Features of Zabbix Scheduled Reports

Let’s break down the main advantages of using scheduled reports in Zabbix:

    • Automation: Define parameters to automatically send specific dashboards on a schedule (daily, weekly, monthly) to designated users.
    • Customization: Leverage your existing Zabbix dashboards. The reports are generated directly from the dashboards you design with widgets.
    • PDF Format: Reports are generated in PDF, the universally readable and versatile format.
    • Access Control: Control who can create and manage scheduled reports using user roles and permissions within Zabbix (Admin and Super Admin roles with specific flags).

For more detailed information, I highly recommend checking out the official Zabbix documentation and the Zabbix blog post about scheduled reports. I’ll include links in the description below for your convenience!

Setting Up Zabbix Scheduled Reports: A Step-by-Step Guide

Ready to get started? Here’s how to set up scheduled reports in Zabbix. Keep in mind, this guide is based on a simplified installation for demonstration purposes. For production environments, always refer to the official Zabbix documentation for best practices and advanced configurations.

Prerequisites

Before we begin, make sure you have the following:

    • A running Zabbix server (version 7.0 or higher recommended, 7.2+ for the best experience).
    • Configured dashboards in Zabbix that you want to use for reports.
    • Email media type configured in Zabbix for sending reports.

Installation of Zabbix Web Service and Google Chrome

The magic behind Zabbix scheduled reports relies on a separate component: Zabbix Web Service. This service handles the PDF generation and needs to be installed separately. It also uses Google Chrome (or Chromium) in headless mode to take screenshots of your dashboards and convert them to PDF.

Here’s how to install them on a Red Hat-based system (like Rocky Linux) using YUM/DNF:

    1. Install Zabbix Web Service:
      sudo yum install zabbix-web-service

      Make sure you have the official Zabbix repository configured.

    2. Install Google Chrome Stable:
      sudo yum install google-chrome-stable

      This will install Google Chrome and its dependencies. Be aware that Chrome can pull in quite a few dependencies, which is why installing the web service on a separate, smaller machine can be a good idea for cleaner Zabbix server environments.

Configuring Zabbix Server

Next, we need to configure the Zabbix server to enable scheduled reports and point it to the web service.

    1. Edit the Zabbix Server Configuration File:
      sudo vi /etc/zabbix/zabbix_server.conf
    2. Modify the following parameters:
        • StartReportWriters=1 (Change from 0 to 1 or more, depending on your reporting needs. Start with 1 for testing.)
        • WebServiceURL=http://localhost:10053/report (Adjust the IP address and port if your web service is running on a different machine or port. 10053 is the default port for Zabbix Web Service; note that the value goes unquoted in zabbix_server.conf.)
    3. Restart Zabbix Server:
      sudo systemctl restart zabbix-server
    4. Start Zabbix Web Service:
      sudo systemctl start zabbix-web-service
    5. Enable Zabbix Web Service to start on boot:
      sudo systemctl enable zabbix-web-service
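Before touching the frontend, it’s worth confirming that the web service is actually listening and that the server is picking up report work; a couple of quick checks (the log path assumes a default package install):

# Confirm the web service is listening on its default port (10053)
ss -tlnp | grep 10053

# Watch the server log for report writer activity
tail -f /var/log/zabbix/zabbix_server.log | grep -i report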

Configuring Zabbix Frontend

One last crucial configuration step in the Zabbix web interface!

    1. Navigate to Administration -> General -> GUI.
    2. Modify “Frontend URL”: Set this to the full URL of your Zabbix frontend (e.g., http://your_zabbix_server_ip/zabbix). This is essential for Chrome to access the dashboards correctly for PDF generation.
    3. Click “Update”.

Creating a Scheduled Report

Now for the fun part – creating your first scheduled report!

    1. Go to Reports -> Scheduled reports.
    2. Click “Create scheduled report”.
    3. Configure the report:
        • Name: Give your report a descriptive name (e.g., “Weekly Server Health Report”).
        • Dashboard: Select the dashboard you want to use for the report.
        • Period: Choose the time period for the report data (e.g., “Previous week”).
        • Schedule: Define the frequency (daily, weekly, monthly), time, and start/end dates for report generation.
        • Recipients: Add users or user groups who should receive the report via email. Make sure they have email media configured!
        • Generated report by: Choose if the report should be generated based on the permissions of the “Current user” (the admin creating the report) or the “Recipient” of the report.
        • Message: Customize the email message that accompanies the report (you can use Zabbix macros here).
    4. Click “Add”.

Testing and Troubleshooting

To test your setup, you can use the “Test” button next to your newly created scheduled report. If you encounter issues, double-check:

    • Email media configuration for recipients.
    • Zabbix Web Service and Google Chrome installation.
    • Zabbix server and web service configuration files.
    • Frontend URL setting.
    • Permissions: In the video, I encountered a permission issue related to the /var/lib/zabbix directory. You might need to create this directory and ensure the Zabbix user has write permissions if you face similar errors. sudo mkdir /var/lib/zabbix && sudo chown zabbix:zabbix /var/lib/zabbix

Why Zabbix 7.x Makes a Difference

I really started to appreciate scheduled reports with Zabbix 7.0 and 7.2. Why? Because these versions brought significant improvements:

    • Multi-page Reports: Finally, reports can span multiple pages, making them much more comprehensive.
    • Enhanced Dashboard Widgets: Zabbix 7.x introduced richer widgets like Top Hosts, Top Items, Pie charts, and Donut charts. These make dashboards (and therefore reports) far more visually appealing and informative.
    • Custom Widgets: With the ability to create custom widgets, you can tailor your dashboards and reports to very specific needs.

These enhancements make scheduled reports in Zabbix 7.x and above a truly valuable tool for delivering insightful and professional monitoring summaries.

Conclusion

Zabbix Scheduled Reports are a fantastic way to automate the delivery of key monitoring insights to stakeholders. While they’ve been around for a while, the improvements in Zabbix 7.x have made them significantly more powerful and user-friendly. Give them a try, experiment with your dashboards, and start delivering automated, professional PDF reports today!

I hope you found this guide helpful! If you did, please give this post a thumbs up (or share!) and let me know in the comments if you have any questions or experiences with Zabbix Scheduled Reports. Don’t forget to subscribe to Quadrata for more open source and IT tips and tricks.

And if you’re in the Zabbix community, be sure to join the ZabbixItalia Telegram channel – a great place to connect with other Zabbix users and get your questions answered. A big thank you for watching, and I’ll see you in the next video!

Bye from Dimitri!

P.S. Keep exploring Zabbix – there’s always something new and cool to discover!

