Recent News in Analytics and AI: March 2026 Edition

7th April 2026. By Michael A

March 2026 reinforces a clear direction for analytics and AI: insight alone is no longer enough. Across Power BI, Microsoft Fabric, Copilot, Azure and the wider data ecosystem, the focus is shifting towards execution, governance and trust at scale. Updates this month bring analytics closer to operational workflows, make agent behaviour observable and auditable, and strengthen the semantic foundations that AI systems rely on. From translytical reporting and modernised visuals to agent runtimes, graph‑based reasoning and enterprise DevOps patterns, these announcements reflect a maturing landscape where AI is expected to operate reliably inside real business processes, not just generate impressive demos.

Read on and get up to speed.

Power BI


  • Power BI’s March 2026 update focuses on making analytics more actionable, consistent, and enterprise‑ready. Translytical task flows reach general availability, allowing users to update records and trigger workflows directly from reports, closing the gap between insight and action. Modern visual defaults arrive in preview, introducing Fluent 2 styling, improved spacing, and cleaner defaults that reduce formatting effort. Modelling sees a major step forward with Direct Lake in OneLake becoming generally available, delivering high performance without complex refresh cycles. Together, these updates strengthen Power BI’s role as both a decision and execution surface, particularly for organisations scaling analytics across Fabric. Learn more.

  • Power BI extends its code‑first modelling strategy with TMDL View on the Web, now available in preview. The feature allows semantic models to be viewed and edited as human‑readable code directly in the browser, enabling bulk updates and deeper transparency into model metadata. Developers can modify advanced properties, reuse shared definitions, and experiment safely using preview and edit modes. By removing the dependency on Power BI Desktop, Microsoft makes semantic modelling more accessible across platforms while supporting enterprise development patterns such as version control, collaboration, and automation within Fabric‑based analytics estates. Learn more.
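Because TMDL represents a semantic model as plain text, bulk updates of the kind described above become ordinary text processing. The sketch below is illustrative, not taken from a real model: a simplified TMDL-like fragment gets an explicit `formatString` added to every measure in one pass. The table, measures, and format value are hypothetical.

```python
import re

# A simplified, TMDL-like model definition (illustrative syntax only).
tmdl = """table Sales
    measure Revenue = SUM(Sales[Amount])
    measure Orders = COUNTROWS(Sales)
"""

def add_format_string(source: str, fmt: str) -> str:
    """Append a formatString line after every measure definition."""
    return re.sub(
        r"(measure \w+ = [^\n]+)",
        rf"\1\n        formatString: {fmt}",
        source,
    )

updated = add_format_string(tmdl, "#,0")
assert updated.count("formatString: #,0") == 2  # both measures updated
```

The point is less the regex than the workflow: once the model is text, version control, code review, and scripted bulk edits all apply to it directly.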

  • Power BI positions semantic layers as the critical foundation for trustworthy enterprise AI. Semantic models standardise business definitions, relationships, and governance, ensuring consistent answers across analytics and AI experiences. Microsoft Fabric IQ extends this semantic layer beyond dashboards, allowing Copilot and other AI tools to reason over trusted business logic rather than raw data. New Power BI capabilities reinforce this approach by making models more open, actionable, and integrated into daily workflows. The message is clear: AI value depends less on model sophistication and more on well‑governed semantics that organisations already trust for decision‑making. Learn more.

  • The modern visual defaults preview refreshes Power BI’s out‑of‑the‑box report experience. Aligned to Fluent 2 design, visuals now feature cleaner spacing, smooth chart lines, dropdown slicers, and improved button styling by default. The canvas expands to 1920×1080 for new pages, creating more room for layouts suited to presentations and widescreen displays. Built‑in style presets and enhanced theme customisation make it easier to maintain consistent branding at scale. This update matters because it significantly reduces formatting overhead while raising the overall visual standard of Power BI reports across organisations. Learn more.



Microsoft Fabric


  • Microsoft brought FabCon and SQLCon together in 2026 to signal a decisive shift towards a fully converged data platform. The event showcased how SQL databases and Microsoft Fabric are being unified under a single architecture that spans transactional, operational and analytical workloads. Central to this vision is the new Database Hub in Fabric, which provides a common control plane across cloud, on‑premises and edge databases. Combined with OneLake, Fabric IQ and Power BI’s semantic models, the platform aims to reduce data fragmentation while making enterprise data more usable for AI. This matters because AI systems increasingly depend on consistent, governed and relationship‑aware data foundations. Learn more.

  • Microsoft positioned Microsoft Fabric as the missing operational layer for agentic AI. Rather than focusing on orchestration alone, the architecture shows how agent decisions, tool usage and outcomes can be continuously captured and analysed as structured data. OneLake provides a shared foundation for operational telemetry, analytics and BI, allowing teams to understand what agents did, why they did it and whether it was safe or effective. The example banking application illustrates patterns that generalise across industries. This is significant because enterprises need explainability and accountability from agents, not just intelligent responses, if agentic systems are to be trusted in live business processes. Learn more.

  • Materialized Lake Views reached general availability, turning a popular preview feature into a production‑ready foundation for data engineering in Microsoft Fabric. These views allow engineers to define medallion‑style transformations declaratively using Spark SQL or PySpark, while Fabric handles orchestration, dependency tracking and refresh logic. The GA release adds broader incremental refresh support, multiple schedules and stronger data quality controls. This significantly reduces the need for custom ETL pipelines and manual optimisation. The announcement matters because it lowers operational overhead while keeping performance predictable as data volumes grow, making Lakehouse architectures easier to build and maintain at scale. Learn more.
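The dependency tracking and refresh ordering that the platform takes on can be pictured as a topological sort over declared views. This is a minimal stand-in sketch, not Fabric's actual implementation; the view names and medallion layers are hypothetical.

```python
from graphlib import TopologicalSorter

# Each view maps to the upstream views/tables it reads from
# (bronze -> silver -> gold, medallion style).
dependencies = {
    "silver_orders": {"bronze_orders"},
    "silver_customers": {"bronze_customers"},
    "gold_revenue": {"silver_orders", "silver_customers"},
}

# Compute a refresh order in which every view is rebuilt
# only after all of its sources.
refresh_order = list(TopologicalSorter(dependencies).static_order())
assert refresh_order[-1] == "gold_revenue"
```

Declaring the transformations and letting the engine derive this ordering is what removes the hand-written orchestration code from medallion pipelines.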

  • Microsoft introduced graph‑powered AI reasoning in preview to address a growing enterprise need: AI systems that can explain how they reached an answer. The capability combines large language models with graph‑based reasoning over entities and relationships stored in Microsoft Fabric. Natural language queries are translated into Graph Query Language and executed as deterministic traversals, enabling inspectable reasoning paths. This approach supports graph‑based retrieval augmented generation and aligns with Fabric IQ’s semantic layer. The announcement is significant because it moves AI from opaque text generation towards auditable, relationship‑aware reasoning, which is essential for regulated and high‑trust business scenarios. Learn more.
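The core idea, deterministic traversal with an inspectable reasoning path, can be sketched without any LLM at all. Assume a question has already been compiled into a chain of relations; the traversal below records every hop so the answer can be audited. The graph, entities, and relations are invented for illustration.

```python
# A tiny entity-relationship graph: (node, relation) -> list of targets.
GRAPH = {
    ("Contoso Ltd", "supplies"): ["Fabrikam Inc"],
    ("Fabrikam Inc", "operates_in"): ["Germany"],
}

def traverse(start: str, relations: list[str]):
    """Follow a fixed chain of relations, recording every hop taken."""
    frontier, path = [start], []
    for rel in relations:
        next_frontier = []
        for node in frontier:
            for target in GRAPH.get((node, rel), []):
                path.append((node, rel, target))   # inspectable reasoning step
                next_frontier.append(target)
        frontier = next_frontier
    return frontier, path

# "Which countries do Contoso's suppliers operate in?"
# compiles (hypothetically) to a two-relation chain.
answer, reasoning_path = traverse("Contoso Ltd", ["supplies", "operates_in"])
print(answer)          # the final entities
print(reasoning_path)  # every hop that led there
```

Unlike free-text generation, the same question always produces the same path, which is what makes the reasoning auditable in regulated scenarios.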

  • Microsoft strengthened Fabric’s security posture by building data protection directly into the platform rather than relying on bolt‑on controls. Expanded Purview‑powered DLP policies now cover structured data across OneLake, reducing the risk of accidental or malicious exposure as data moves between workloads. Sensitivity labels can be discovered and applied using public APIs, supporting automation and integration with enterprise governance processes. The significance of this update lies in its timing. As organisations operationalise AI at scale, consistent and automatic data protection becomes essential. Fabric’s approach balances strong security enforcement with the flexibility required for modern analytics and AI scenarios. Learn more.

  • Microsoft expanded the Fabric CLI with v1.5, making it a production‑ready tool for DevOps, automation and AI‑assisted operations. A new deploy command simplifies CI/CD by enabling full workspace deployments from a single instruction. Enhanced Power BI support and interactive REPL mode improve day‑to‑day management, while pre‑installation in Fabric notebooks turns notebooks into a remote execution surface. The addition of an AI agent execution layer reflects Microsoft’s broader agentic direction. This release is important because it reduces reliance on manual portal work and brings Fabric closer to established developer workflows used across modern data platforms. Learn more.



Microsoft 365 Copilot and Copilot Studio


  • Microsoft introduced multi‑model intelligence to Researcher in Microsoft 365 Copilot, raising the quality bar for workplace research. Two capabilities sit at the core. Critique separates generation from evaluation by using one model to research and draft content, while another independently reviews accuracy, structure and sourcing. Council presents side‑by‑side outputs from multiple models, highlighting areas of agreement and divergence to support informed judgement. Internal benchmarking shows measurable gains in research accuracy and completeness compared with single‑model approaches. This matters because knowledge workers can now trust AI‑generated research for higher‑stakes decisions without leaving their everyday Microsoft 365 workflow. Learn more.
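The Critique pattern, separating generation from evaluation, can be sketched with two stubbed roles: one function drafts, a second independently reviews, and drafts are revised until the reviewer raises no issues. Both functions are placeholders, not real Copilot or model APIs.

```python
def drafting_model(question: str) -> str:
    """Stub for the generating model: drafts an answer with a citation."""
    return f"Draft answer to: {question} [source: internal wiki]"

def reviewing_model(draft: str) -> list[str]:
    """Stub for the independent reviewer: checks accuracy/sourcing rules."""
    issues = []
    if "[source:" not in draft:
        issues.append("missing citation")
    return issues

def critique(question: str, max_rounds: int = 3):
    """Generate, then review; revise until clean or the round limit hits."""
    draft = drafting_model(question)
    issues: list[str] = []
    for _ in range(max_rounds):
        issues = reviewing_model(draft)
        if not issues:
            return draft, issues
        draft = drafting_model(f"{question} (address: {'; '.join(issues)})")
    return draft, issues

answer, open_issues = critique("What changed in the Q3 policy?")
```

The value of the split is that the reviewer never sees its own work, so generation errors are not self-confirmed.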

  • Microsoft announced upcoming price changes to Microsoft 365 and urged organisations to secure Copilot Business bundles before 1 July 2026. Purchasing or renewing ahead of this date allows customers to lock in current pricing and access time‑limited promotional discounts running until 30 June 2026. Copilot Business is positioned for small and medium‑sized organisations, embedding AI directly into familiar Microsoft 365 apps with built‑in security and admin controls. The announcement matters for budget planning, as early commitment can deliver meaningful savings while enabling teams to adopt Copilot capabilities without future cost uncertainty. Learn more.

  • The growth of Copilot connectors introduces a more complete data ecosystem for Microsoft 365 users by securely linking external systems into Copilot’s reasoning environment. With the library now exceeding 100 connectors, teams gain the ability to search, analyse and act on information previously siloed in line‑of‑business platforms. This is important because meaningful AI assistance relies on high‑quality context, and connectors make that context accessible. The expansion also offers developers more flexibility to build tailored integrations. Overall, the update represents a major step towards enterprise‑wide AI, helping organisations streamline operations by placing broader data intelligence directly inside their everyday tools. Learn more.

  • Microsoft announced new capabilities that allow Copilot agents to surface live app experiences directly inside Copilot chat. Employees can now complete real actions, such as scheduling meetings or updating CRM records, without switching applications. Alongside this productivity gain, Microsoft introduced improved agent discovery and ongoing evaluation tools to help IT teams manage risk, usage and quality. The update is important because it closes the gap between AI insight and execution while reinforcing governance. Organisations can scale agent usage with greater confidence, knowing that controls, visibility and behaviour monitoring remain firmly in place. Learn more.

  • Microsoft announced a set of Copilot Studio updates aimed at helping organisations build and run agents with confidence at scale. Enhanced agent evaluations turn expectations into measurable checks, enabling teams to test quality, catch regressions and review behaviour using real scenarios. Improvements to computer‑using agents expand automation across live systems, while new Agent Academy training paths support makers building more complex solutions. This matters because many organisations struggle to move beyond pilot agents. These updates focus on trust, repeatability and skills development, addressing the practical barriers to production‑ready agent deployments. Learn more.



Azure


  • Microsoft has taken Foundry Agent Service to general availability, shifting AI agents firmly into enterprise‑ready territory. The GA release focuses on the operational gaps that typically block production deployments, including private networking with bring‑your‑own VNets, expanded identity options through Entra and managed identities, and built‑in evaluations with continuous monitoring. The service is now based on the OpenAI Responses API, making migration from existing agent implementations straightforward. Voice Live enters preview, enabling real‑time speech‑to‑speech agents. The combination of security, observability and open model support makes Foundry a credible runtime for mission‑critical agentic systems. Learn more.

  • The Foundry Citadel Platform provides a reference architecture for enterprise AI governance, addressing the growing risk of unmanaged “shadow AI”. Built as an opinionated, layered design, Citadel separates governance from execution through a hub‑and‑spoke model. A central Governance Hub enforces identity, policy, cost controls and observability, while isolated Agent Spokes run workloads securely. The platform includes pre‑built infrastructure‑as‑code templates, shared registries and managed guardrails to accelerate deployment. Rather than a one‑size‑fits‑all product, Citadel offers adaptable patterns that help organisations scale AI safely while meeting regulatory and security expectations. Learn more.

  • At FabCon 2026, Azure Databricks announced updates that push the lakehouse closer to everyday, operational use. Lakebase reached general availability as a low‑latency database designed for AI agents, while Genie and Genie Code extend natural language and agentic workflows across data engineering and analytics. Lakeflow Connect introduced a free tier, enabling ingestion of up to 100 million records per workspace per day under unified governance. Databricks One Mobile expands access further by bringing dashboards, AI and Genie insights to mobile devices, supporting on‑the‑go decision‑making. Together, these changes make governed data and AI more accessible, operational and embedded in daily workflows. Learn more.

  • Microsoft has introduced a Database DevOps preview in SQL Server Management Studio 22.4.1, bringing modern DevOps practices directly into familiar SQL tooling. The preview integrates Git‑based source control and schema comparison workflows into SSMS, enabling database changes to be versioned, reviewed and deployed more consistently. By reducing reliance on manual scripts and ad‑hoc processes, the feature helps teams improve reliability and auditability across database changes. This matters because database development has often lagged application DevOps maturity. Embedding these capabilities in SSMS lowers the adoption barrier and supports more predictable, repeatable database delivery in enterprise environments. Learn more.

  • Databricks has entered the cybersecurity market with Lakewatch, an open, agentic SIEM designed for the scale and speed of AI‑driven attacks. Built on the lakehouse architecture, Lakewatch unifies security, IT and business telemetry in open formats, allowing organisations to retain and analyse far more data at significantly lower cost. AI agents automate detection, investigation and response, shifting security operations towards machine‑speed defence. This matters as traditional SIEMs struggle with ingestion costs and manual workflows. By decoupling storage from compute and avoiding vendor lock‑in, Lakewatch reframes security as a data problem that modern platforms are better equipped to solve. Learn more.



Open-Source


  • DuckDB released duckdb-skills, a Claude Code plugin that turns local data exploration into a set of reusable “slash command” skills backed by DuckDB. Core commands attach databases, run SQL or natural language queries, and read many file formats locally or from cloud storage. A shared, append-only state.sql file persists ATTACH, LOAD, secrets, and macros so later sessions restore context automatically, and there is a skill to search DuckDB and DuckLake documentation via hosted indexes. This matters because it standardises repeatable analysis workflows inside coding sessions, reducing setup friction and improving continuity across investigations. Learn more.
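The append-only state-file mechanism can be pictured as two small operations: append each setup statement as it happens, and replay the file at the start of a new session. This sketch only simulates the idea; the file name and statements are illustrative, and it does not invoke the actual plugin or DuckDB itself.

```python
from pathlib import Path

STATE = Path("state_demo.sql")  # stand-in for the plugin's state.sql

def record(statement: str) -> None:
    """Append a setup statement so future sessions can replay it."""
    with STATE.open("a") as f:
        f.write(statement.rstrip(";") + ";\n")

def restore() -> list[str]:
    """Read back every recorded statement, in order, for a fresh session."""
    if not STATE.exists():
        return []
    return [s.strip() for s in STATE.read_text().split(";") if s.strip()]

record("ATTACH 'sales.duckdb' AS sales")
record("LOAD httpfs")
assert restore() == ["ATTACH 'sales.duckdb' AS sales", "LOAD httpfs"]
STATE.unlink()  # clean up the demo file
```

Because the file is append-only and replayed in order, later sessions inherit attachments, extensions, and macros without any manual re-setup.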

  • DuckDB benchmarked Apple’s entry-level MacBook Neo (8 GB RAM, A18 Pro) to test “big data on your laptop” claims, using ClickBench and TPC-DS. In ClickBench cold runs, the laptop finished all 43 queries in under a minute, beating cloud instances largely because local NVMe avoided network-attached disk bottlenecks. Hot runs flipped the story: a large Graviton instance dominated, but the Neo remained close to a mid-sized cloud machine despite far less RAM. For TPC-DS, it completed SF100 in 15.5 minutes and SF300 in 79 minutes via disk spilling. The takeaway is pragmatic: it works, but storage and memory limits matter. Learn more.

  • Daft packaged its UDF guidance into a single notebook that compares four patterns side by side on one dataset: row-wise transforms, generator UDFs for one-to-many outputs, async UDFs for concurrent I/O, and stateful class-based UDFs that amortise expensive setup like model loading. Each section starts from a practical problem, shows runnable code, and explains when to choose that pattern. The key message is ergonomics without sacrificing scale: the same decorated Python runs locally and can move to a Ray cluster with minimal change. This matters for teams building data and AI pipelines because UDFs are where performance traps and operational complexity often start, and Daft is trying to make those choices explicit and repeatable. Learn more.
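The stateful, class-based UDF pattern is worth seeing in miniature: expensive setup runs once in the constructor and is amortised across every row the instance processes. This is a plain-Python illustration of the pattern, not Daft's decorator API, and the "model" is a stand-in dictionary.

```python
SETUP_CALLS = 0  # counts how often the expensive setup actually runs

class EmbedUDF:
    """Stateful UDF: pay the setup cost once, reuse it per row."""

    def __init__(self):
        global SETUP_CALLS
        SETUP_CALLS += 1  # pretend this is slow model loading
        self.model = {"hello": [1.0], "world": [2.0]}

    def __call__(self, token: str) -> list[float]:
        # Per-row work reuses the already-loaded state.
        return self.model.get(token, [0.0])

udf = EmbedUDF()  # setup happens here, once
vectors = [udf(t) for t in ["hello", "world", "hello"]]
assert SETUP_CALLS == 1  # not once per row
```

The same shape is what makes class-based UDFs the right choice when per-row functions would otherwise reload a model on every call.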

  • Meta introduced TRIBE v2, a predictive foundation model designed to simulate how the human brain responds to sight, sound and language. The model combines pretrained audio, video and text embeddings with a transformer that learns shared representations across stimuli, before mapping them to brain activity measured via fMRI. TRIBE v2 significantly scales both resolution and coverage, predicting activity across around 70,000 brain voxels and generalising to new stimuli and individuals without retraining. This matters because it turns months of neuroscience experiments into seconds of computation, enabling in-silico studies that can accelerate brain research, inform AI system design, and support future advances in healthcare and neurological understanding. Learn more.

  • MCP introduced extensions as an additive way to evolve the ecosystem without destabilising the core protocol. The post frames MCP in three layers: the core specification for baseline interoperability, supporting projects like registries and inspectors, and optional extensions for specialised capabilities. Extensions are negotiated during initialisation and safely ignored by clients or servers that do not support them, so the baseline still works. Concrete patterns include UI-oriented extensions such as MCP Apps, and authorisation extensions that build on OAuth to enable machine-to-machine flows or enterprise-managed identity. Domain-specific conventions are also emerging for regulated verticals. The key point is governance: extensions can start as community experiments and, if they gain traction through the Specification Enhancement Proposal (SEP) process, become official. This matters for teams adopting MCP because it provides a path to richer functionality without waiting for slow-moving core spec changes. Learn more.
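The negotiation behaviour described above reduces to a simple rule: each side declares what it supports during initialisation, only the intersection is activated, and everything unrecognised is silently ignored so the core protocol still works. The extension names below are illustrative, not official identifiers.

```python
def negotiate(client_extensions: set[str], server_extensions: set[str]) -> set[str]:
    """Activate only the extensions both sides understand; ignore the rest."""
    return client_extensions & server_extensions

active = negotiate(
    {"mcp-apps/ui", "auth/enterprise-sso", "vertical/healthcare"},
    {"mcp-apps/ui", "auth/machine-to-machine"},
)
# Only the shared extension is active; the mismatched ones are dropped,
# and the baseline protocol works regardless.
assert active == {"mcp-apps/ui"}
```

This intersect-and-ignore semantics is what lets extensions evolve independently without ever breaking clients or servers that have never heard of them.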



Industry


  • NVIDIA introduced NemoClaw, an open stack for the OpenClaw agent platform designed to make always on AI assistants safer and easier to deploy. NemoClaw installs NVIDIA Nemotron models and the new OpenShell runtime in a single command, aiming to add policy based privacy, security and network guardrails around autonomous agents. OpenShell provides an isolated sandbox so agents can access tools and data while still enforcing permissions. The stack supports a hybrid setup, running open models locally and routing to frontier cloud models via a privacy router. This matters because enterprises want agent productivity without losing control of data and behaviour. Learn more.

  • V‑JEPA 2.1 is a new way for AI to learn from videos without needing human labels. Instead of being told what it is looking at, the system learns by predicting missing parts of images and video frames, helping it build a richer understanding of what is happening over time. This approach allows the AI to notice important details, such as how objects move and interact, rather than just recognising whole scenes. The research shows this leads to better performance in tasks like understanding human actions and helping robots pick up objects. This matters because it reduces the cost and effort of training AI, while making systems more adaptable to the real world. Learn more.

  • Perplexity positions its new “Computer” concept as a move from AI that merely advises to AI that actively completes work. Early internal versions acted as a digital worker inside Slack, breaking down tasks, creating files, using tools, and running for long periods with minimal supervision. The central insight is that no single AI model excels at everything, as models are increasingly specialised. Instead, strong results come from orchestrating multiple models together. By combining web access, a file system, secure execution, and long‑term memory, Perplexity frames Computer as a full working environment rather than a chatbot. This shift matters because it reframes AI products around outcomes and execution, not just conversation. Learn more.

  • Anthropic offers a decision framework for picking agent workflow structures that balance reliability, cost and speed. Sequential workflows are recommended when step B depends on step A, including data transforms and content pipelines, because they make failures easier to observe and can boost accuracy, albeit with slower end‑to‑end latency. Parallel workflows are positioned for independent work, such as running several evaluators or reviewers simultaneously, but they need an aggregation method before implementation or they produce unresolved contradictions. Evaluator‑optimiser workflows are recommended when first‑draft quality consistently misses explicit standards, pairing a generator with an evaluator in a bounded loop. The piece repeatedly warns to start with the simplest workable pattern and only add extra agents when there is a measurable gain. This matters for teams moving from prototypes to production, where orchestration mistakes become cost and governance problems. Learn more.
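The evaluator-optimiser pattern in particular benefits from a sketch: a generator drafts, an evaluator scores the draft against explicit standards and returns feedback, and the loop is bounded so it cannot run away. Both roles are stubs here, not real agents or model calls.

```python
def generate(prompt: str, feedback: str = "") -> str:
    """Stub generator: produces a draft, improving it if feedback is given."""
    draft = f"Summary of {prompt}"
    if feedback:
        draft += " (with sources)"  # pretend the generator acted on feedback
    return draft

def evaluate(draft: str) -> tuple[bool, str]:
    """Stub evaluator: enforces an explicit, checkable standard."""
    if "(with sources)" not in draft:
        return False, "add sources for every claim"
    return True, ""

def evaluator_optimiser(prompt: str, max_iters: int = 3):
    """Bounded generate-evaluate loop: stop on pass or at the iteration cap."""
    feedback = ""
    draft = ""
    for i in range(1, max_iters + 1):
        draft = generate(prompt, feedback)
        ok, feedback = evaluate(draft)
        if ok:
            return draft, i
    return draft, max_iters

draft, iterations = evaluator_optimiser("Q3 revenue report")
```

The bound on iterations is the governance piece: it converts "loop until good" into a predictable cost ceiling, which matters once these workflows run in production.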

  • DeepMind’s research focuses on protecting people from AI systems that could influence decisions in deceptive or harmful ways. As conversational AI becomes more natural, the concern shifts from simple misinformation to subtle manipulation of beliefs and behaviour. To address this, DeepMind developed and tested a new evaluation approach using real human studies across multiple countries and sensitive domains like finance and health. The findings show that an AI’s ability to manipulate varies significantly depending on context, and that models are most manipulative when explicitly instructed to be. By publishing its methods and integrating them into its Frontier Safety Framework, DeepMind is setting a foundation for consistent testing and oversight. This is important because it enables regulators, developers and organisations to assess persuasion risks systematically rather than relying on assumptions. Learn more.


Across all these developments, the March updates show analytics and AI converging into a single operational fabric. Semantic models, governed data platforms and agent observability are emerging as the real differentiators, enabling organisations to scale AI safely while maintaining consistency and control. The recurring message is pragmatic rather than speculative: production matters more than prototypes, and trust matters more than novelty. For analytics and AI leaders, the opportunity now lies in connecting these capabilities into coherent architectures that support decision‑making, automation and accountability end to end. The technologies are increasingly ready; the challenge is designing operating models that allow them to deliver sustained business value.

Stay in the Know


Get notified when we post something new by following us on LinkedIn and X.