Recent News in Analytics and AI: April 2026 Edition

5th May 2026. By Michael A

The signals in April 2026 were less about novelty and more about confidence. Platform providers focused on making advanced capabilities dependable, governable and ready for scale, while industry discussions increasingly centred on managing AI agents responsibly. This edition covers key updates across Power BI, Microsoft Fabric, Copilot, Databricks, open‑source projects and the wider AI ecosystem, helping you separate incremental improvements from genuinely strategic shifts that will shape how organisations use analytics and AI in practice.

Read on and get up to speed.

Power BI


  • April 2026’s Power BI update leans into polish and personalisation. In‑report Copilot on mobile now supports a full chat experience with citations, making on‑the‑go analysis practical. Reporting gets modern visual defaults, a base theme switcher, and new Canvas settings > Size presets so pages can be set to common resolutions such as HD, Full HD, QHD and 4K while still allowing custom dimensions. Modelling previews include user-context-aware calculated columns that can react to UserCulture(), UserPrincipalName() and CustomData() via the Expression Context property, enabling scenarios such as translations that change per viewer. The net effect is faster design standardisation and more tailored, scalable semantic models. Learn more.

  • After eight years at the heart of many Power BI solutions, Dataflows Gen1 is officially placed into a legacy state, with all future innovation moving to Dataflows Gen2. Gen2 retains the familiar Power Query experience while delivering significant gains in scale, performance, reliability and cost efficiency through deep integration with Microsoft Fabric. Existing Gen1 workloads remain supported, giving teams time to plan rather than forcing immediate migration. For organisations already using Fabric capacity, Gen2 becomes the recommended path for new and evolving workloads. The announcement is important because it clearly signals where Microsoft’s data integration investments are heading and how customers should future‑proof their architectures. Learn more.

  • Translytical task flows reaching general availability signals that Power BI can now support production‑grade, action‑oriented analytics. Organisations can confidently design reports that not only explain what is happening but also enable users to act immediately within governed Fabric workflows. However, GA comes with practical caveats. Deployment pipelines do not yet support translytical task flows, so report promotion between environments involves manual rebinding of data function buttons. This limits end‑to‑end automation for now and should be factored into CI/CD designs. Even so, the feature represents a major shift, turning Power BI into a credible execution layer for operational analytics. Learn more.
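The per-viewer translation scenario mentioned in the April update can be illustrated with a small sketch. This is a Python analogy of the idea, not DAX: in Power BI the lookup would live in a user-context-aware calculated column driven by UserCulture(), whereas the dictionary and function below are hypothetical stand-ins for that mechanism.

```python
# Hypothetical illustration of per-viewer translations: labels are keyed
# by the viewer's culture, with a fallback to a default culture.
TRANSLATIONS = {
    "en-GB": {"Sales": "Sales", "Profit": "Profit"},
    "fr-FR": {"Sales": "Ventes", "Profit": "Bénéfice"},
    "de-DE": {"Sales": "Umsatz", "Profit": "Gewinn"},
}

def localised_label(term: str, culture: str, default: str = "en-GB") -> str:
    """Return the label for `term` in the viewer's culture, with fallback."""
    table = TRANSLATIONS.get(culture, TRANSLATIONS[default])
    return table.get(term, TRANSLATIONS[default].get(term, term))

print(localised_label("Sales", "fr-FR"))   # Ventes
print(localised_label("Sales", "es-ES"))   # unknown culture falls back: Sales
```

The same shape applies in the real feature: the viewer's culture acts as the key, and the model resolves the label at evaluation time rather than baking one language into the report.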



Microsoft Fabric


  • April’s Fabric release is defined by platform hardening rather than headline new UI features. Tabbed multitasking and Object Explorer move to general availability, signalling stability and long-term support. The update also deepens developer workflows with VS Code-based management of Fabric items and preview Maven support for environments. Data Science teams benefit from cross-workspace ML experiment management and updated Semantic Link APIs, while the Data Warehouse now supports transactional schema changes and native JSONL ingestion. Real-Time Intelligence continues to mature with Eventhouse MCP integration and preview capabilities for managing Activator rules directly in Eventstreams, reinforcing Fabric’s readiness for real-world operational analytics. Learn more.

  • Associated identities for Fabric items address a long-standing enterprise risk where critical assets depend on an individual user’s credentials. Lakehouses and Eventstreams can now run under an associated identity such as a service principal or managed identity, rather than the original owner. This removes failure scenarios caused by staff changes or expired accounts. Identity assignment and updates are handled through REST APIs, enabling automation at scale instead of manual remediation. The preview improves reliability, security, and governance by ensuring production assets remain operational regardless of personnel changes, a key requirement for mature Fabric deployments. Learn more.

  • OneLake File Explorer is now generally available, bringing OneLake directly into Windows File Explorer. Users can browse Fabric workspaces and drag local files such as CSVs or Excel sheets straight into OneLake using familiar desktop actions. Once uploaded, files are immediately available to pipelines, notebooks, and semantic models without extra configuration. This removes the friction of browser uploads or custom scripts, helping data engineers, analysts, and business users work faster. The feature matters because it lowers the barrier to getting data into Fabric, accelerating experimentation while keeping data within a governed, central lake. Learn more.

  • Expanded support for the Model Context Protocol represents a significant shift in how Microsoft Fabric can be accessed and operated by AI systems. MCP provides a standard interface that allows AI agents to discover Fabric capabilities and perform real actions without custom integrations. Microsoft introduces both a local MCP server for developer workflows and a remote MCP server in preview that enables authenticated operations directly in Fabric environments. These interactions run through existing security, permissions, and audit controls. This development is important because it moves Fabric beyond being a platform that AI merely queries, towards one that AI agents can actively operate, enabling more advanced automation and decision driven workflows. Learn more.

  • Microsoft Fabric introduced Capacity Scheduler for Eventhouse in preview, bringing more precise control to real-time analytics capacity planning. Eventhouse workloads often follow predictable patterns, such as heavy ingestion during business hours and quieter periods overnight or at weekends. Capacity Scheduler allows teams to define minimum capacity levels in 60-minute blocks across a recurring seven-day schedule, rather than relying on a single static baseline. Autoscale continues to handle spikes above the defined minimum. This matters because organisations can reduce wasted spend during low-demand periods while still guaranteeing performance when it is business critical, improving both cost efficiency and operational confidence. Learn more.

  • Microsoft Fabric’s agentic application guidance has evolved to focus on operational reality rather than experimentation. The updated reference architecture introduces script-driven deployment using Fabric REST APIs, reducing manual setup and making environments more repeatable across development, test and production. A new optional Fabric Data Agent is also introduced, designed to be read-only and safely integrated alongside other agents. Treating agent activity as governed, first-class data remains central to the approach. These changes matter because they lower the barrier to running agentic systems reliably at scale, while improving governance, observability and consistency in enterprise environments. Learn more.

  • Item Recovery in Microsoft Fabric introduces a long-awaited safety net for accidental deletions. Instead of being permanently removed, supported items such as lakehouses, notebooks and pipelines now enter a soft-deleted state and can be restored within a configurable retention window of seven to ninety days. A new workspace recycle bin makes it easy to browse, restore or permanently delete items, with permissions and lineage preserved on recovery. This is important for busy collaborative environments, where mistakes are inevitable. Item Recovery reduces downtime, avoids costly rebuilds and allows teams to self-serve recovery without waiting for administrator intervention. Learn more.



Microsoft 365 Copilot and Copilot Studio


  • Microsoft rolled out a wide set of Microsoft 365 Copilot updates in April 2026, focusing on richer reasoning, better grounding, and more practical controls for organisations. Key additions include Python support and Plan mode in Excel, image editing and public website grounding in PowerPoint, and first-draft generation directly in Outlook’s canvas. Copilot Notebooks gained stronger collaboration and learning features, while the mobile Copilot app received a redesigned, chat-first experience with clearer citations and navigation. For admins, new Copilot Dashboard reporting and billing controls improve visibility. Together, these updates signal a shift from experimentation to everyday, scalable Copilot usage. Learn more.

  • Microsoft introduced real-time voice agents in Copilot Studio, bringing natural, interruptible voice conversations to enterprise contact centres. Unlike traditional IVR systems, these agents listen, reason, and respond in real time, adapting as conversations change and carrying context across hand-offs to human agents. The capability is now generally available and launches with Dynamics 365 Contact Center, building on Copilot Studio’s low-code platform already used by most Fortune 500 companies. This matters because voice remains the dominant customer service channel, and real-time voice agents promise faster resolution, better customer trust, and lower operational costs without sacrificing compliance or governance. Learn more.

  • Microsoft expanded Copilot’s security and governance capabilities to help organisations deploy AI with greater confidence. Updates include enhanced Microsoft Purview Data Loss Prevention to prevent sensitive data from being used in Copilot prompts or web searches, alongside new tools to identify and remediate overshared files at scale. Microsoft also refreshed its Secure and Govern deployment guidance, giving IT teams a clearer starting point for compliant Copilot rollouts. These enhancements matter because they address one of the biggest barriers to AI adoption, trust, by giving admins practical controls without slowing down everyday Copilot usage. Learn more.

  • Microsoft’s agents-plus-workflows approach in Copilot Studio reframes automation as a partnership between intelligence and structure. Rather than choosing between flexible AI agents or rigid workflows, organisations can now orchestrate both within a single platform. Agents handle ambiguity, such as document interpretation or exception handling, while workflows manage repeatable steps and governance. This model matters because it reflects real enterprise needs, where predictability and audit trails are essential. By supporting these patterns natively, Copilot Studio lowers the barrier to deploying AI-powered automation that can scale beyond isolated use cases. Learn more.

  • The latest Copilot Studio update focuses on scaling AI beyond single agents. Multi-agent orchestration is now generally available, making it easier to connect specialised agents across analytics, productivity, and line-of-business systems. Improvements to the Prompt Editor help makers refine behaviour faster, while new governance controls support real-world, regulated deployments. This matters because many organisations struggle not with building agents, but with making them work together reliably. Microsoft’s approach reduces custom integration work and accelerates the transition from pilots to production-grade, agent-driven solutions. Learn more.



Microsoft Foundry


  • Microsoft outlines a full, end‑to‑end workflow for building production‑grade AI agents using Microsoft Agent Framework and Microsoft Foundry. Development starts locally with the Foundry Toolkit for Visual Studio Code, enabling debugging, testing, and multi‑agent composition before deployment. The journey then progresses through stateful memory, shared tooling via Foundry Toolbox, hosted agents for scalable execution, and enterprise observability with tracing and evaluations. This matters because it removes the gap between prototype and production, giving teams a single, coherent stack that supports governance, reliability, and scale without stitching together multiple frameworks and platforms. Learn more.

  • Hosted agents extend Microsoft Foundry with secure, scalable infrastructure designed specifically for agentic workloads. Instead of adapting containers or serverless platforms, each agent runs in a dedicated sandbox with persisted state and built‑in identity. Foundry manages scaling, networking, and observability, while supporting popular frameworks and open protocols. The update matters because enterprise agents often need long‑running execution, filesystem access, and strong isolation. Hosted agents provide these capabilities out of the box, making production deployment more predictable and cost‑efficient. Learn more.

  • Agent Framework 1.0 represents Microsoft’s consolidated approach to AI agent development. Available for Python and .NET, it delivers stable APIs, enterprise‑grade orchestration, and support for multiple model providers. By combining the strengths of Semantic Kernel and AutoGen, the framework simplifies multi‑agent workflows while remaining open and extensible. The release is important because it signals confidence in a single, supported SDK for production workloads, reducing fragmentation and making it easier for teams to standardise how agents are built and maintained. Learn more.

  • Microsoft introduced Toolboxes in Microsoft Foundry, a new way to centrally manage and reuse tools across AI agents. Instead of wiring tools individually into each agent, teams can define a curated toolbox once and expose it through a single endpoint. Foundry handles authentication, credential management, and policy enforcement. This is important because tool integration is a major bottleneck in enterprise agent development. Toolboxes reduce duplication, improve governance, and allow tools to evolve independently of agent code, making large‑scale agent deployments more maintainable. Learn more.

  • Microsoft made Foundry Local generally available, allowing developers to ship fully self‑contained AI applications that run entirely on local hardware. The solution supports multiple languages and platforms, with automatic model optimisation and lifecycle management handled by the runtime. This is significant for regulated industries and edge use cases where data cannot leave the device. By aligning Foundry Local with the broader Microsoft Foundry ecosystem, Microsoft provides a unified approach to building AI solutions that span cloud, on‑premises, and offline environments. Learn more.



Databricks


  • Databricks introduced the next generation of Genie as the primary business user experience across the platform, replacing the former Databricks One interface. Genie now answers questions across multiple data domains, combining governed dashboards, certified Genie Spaces, Databricks Apps and enterprise knowledge from sources such as Google Drive and SharePoint. Rather than relying on fragile text-to-SQL approaches, Genie reuses trusted business logic already defined by analysts. Agentic reasoning allows Genie to synthesise insights across structured and unstructured data. Mobile support and account-level access make governed analytics available anywhere, helping organisations scale trusted self-service insights without extra modelling overhead. Learn more.

  • GPT‑5.5 is now fully integrated into Databricks, bringing OpenAI’s strongest frontier model directly to enterprise data workloads. Customers can use GPT‑5.5 for agent building, coding with Codex, document intelligence and natural language data exploration with Genie. Unity AI Gateway governs every interaction, enforcing security policies, monitoring usage and consolidating billing across tools. Automatic failover and observability features improve reliability and transparency at scale. This release matters because it removes the trade-off between cutting-edge model capability and enterprise governance, enabling organisations to deploy powerful AI systems without copying data or sacrificing control. Learn more.

  • The rise of agent-driven software development has created a new problem for enterprises: uncontrolled coding agent sprawl. Databricks introduced Unity AI Gateway features to govern this growth without slowing teams down. A single control plane manages access, monitors MCP calls, enforces cost limits and provides end-to-end visibility across tools such as Codex, Cursor and Claude Code. This matters because coding agents often require privileged access to sensitive systems. Centralised governance reduces risk, improves accountability and helps organisations adopt AI-assisted development at scale with confidence. Learn more.

  • Databricks showed how Document Intelligence and Lakeflow work together to unlock unstructured enterprise knowledge trapped in PDFs, images and office files. Lakeflow Connect ingests documents from systems such as SharePoint and Google Drive directly into the lakehouse with governance applied from the start. Document Intelligence then parses and structures content using AI grounded in enterprise context. This creates trusted, queryable datasets that integrate with analytics and agentic workflows. The approach replaces brittle, siloed document-processing tools with a unified, production-grade pipeline. It matters because most enterprise knowledge remains inaccessible, limiting the impact of analytics and AI initiatives. Learn more.

  • Apache Iceberg v3 entered public preview on Databricks, bringing new capabilities to the open lakehouse. Row lineage and deletion vectors make change data processing faster and more reliable, while VARIANT improves support for semi-structured data. Because these features are standardised, teams gain performance benefits without locking themselves into a single engine. Unity Catalog provides a consistent governance layer across managed and external Iceberg tables. The update is important because it moves open table formats closer to parity with proprietary alternatives, giving organisations confidence to adopt open architectures for large-scale, mission-critical analytics and AI workloads. Learn more.



Open-Source


  • Delta Lake 4.2.0 reinforces the move towards catalogue-managed lakehouses while broadening support beyond Spark. Catalogue tables gain atomic SQL operations and automatic schema and property synchronisation, reducing the risk of partial writes and manual drift. Streaming improves through the experimental Spark V2 connector, offering finer control over offsets and change handling. A new kernel-based Flink connector brings consistent transactional semantics to non-Spark engines. At the storage layer, support for geospatial data, collations and variant columns expands analytical use cases. Together, these changes make Delta Lake a more reliable, engine-agnostic foundation for governed data platforms. Learn more.

  • Segment Anything Model 3 (SAM 3) introduces promptable concept segmentation, allowing users to detect and track all instances of an open-vocabulary concept using simple language. Unlike earlier versions, segmentation is exhaustive rather than interactive, making it practical for large image and video collections. The model combines detection, segmentation and tracking with a presence token that reduces false positives for similar concepts. SAM 3 is open-sourced and designed as a foundation model that can plug into larger systems. This release is important because it transforms segmentation from a manual tool into a scalable infrastructure component for vision pipelines. Learn more.

  • DuckLake 1.0 marks a production-ready milestone for a SQL-native lakehouse format that stores metadata in a database rather than object storage files. By inlining small inserts, updates and deletes directly into the catalogue, DuckLake avoids the small-file problem that plagues traditional lake formats. Features such as sorted tables, bucket partitioning and Iceberg-compatible deletion vectors improve performance and interoperability. The first implementation ships as a DuckDB extension with guaranteed backward compatibility. DuckLake matters because it simplifies metadata management, accelerates common operations and challenges the assumption that lakehouse metadata must live as files. Learn more.

  • Polars presents a clear mental model for schema evolution that replaces guesswork with deliberate configuration. Additive, subtractive, drifting and breaking changes are treated differently, with concrete guidance on which options apply during reads or writes. The post highlights how embedded schemas in Parquet and table formats enable safer evolution, while CSV relies on inference with inherent risks. This matters because schema mismatches are a leading cause of data pipeline failures. By surfacing the exact trade-offs and controls, Polars empowers engineers to choose resilience or strictness explicitly, improving long-term maintainability and trust in analytical data products. Learn more.

  • Google’s Gemma 4 release positions open models as a first-class complement to Gemini. Four model sizes balance capability and deployability, with particular focus on intelligence per parameter. Support for multimodal inputs, agentic workflows and large contexts enables more than chat-based use cases. By licensing Gemma 4 under Apache 2.0, Google encourages broad experimentation and commercial adoption. The significance lies in making advanced reasoning and multimodal AI practical on consumer and enterprise hardware, strengthening the open ecosystem while giving developers flexibility to build and run models where data and latency constraints demand local execution. Learn more.



Industry


  • OpenAI sets out “people first” industrial policy ideas for an intelligence age it frames as moving towards superintelligence. The proposals are positioned as early and exploratory rather than a final blueprint, with an explicit invitation for democratic debate and challenge. The central aim is to expand opportunity, share prosperity, and strengthen institutions so advanced AI benefits society broadly, not just a few. OpenAI also commits to practical next steps: gathering feedback via a dedicated email address, funding pilot fellowships and research grants, and convening policy discussions at a new Washington, DC workshop opening in May. Learn more.

  • Anthropic’s Opus 4.7 launch targets the “agentic economy” problem: models that can be trusted with long, multi-step work without constant supervision. The release claims meaningful improvements in software engineering, sustained reasoning, and careful instruction following, including a stronger tendency to verify results rather than guessing. Vision is materially upgraded through higher-resolution image inputs, supporting more precise interpretation of charts, screenshots, and dense diagrams. Alongside capability, the announcement foregrounds risk management. Anthropic says Opus 4.7 has reduced cybersecurity capability versus Mythos Preview and is shipped with automated blocks for prohibited or high-risk cybersecurity prompts, with a pathway for verified professionals to use it for defensive work. Learn more.

  • Claude Managed Agents entered public beta as an “agent runtime in a box” for teams building on the Claude Platform. Instead of repeatedly re-engineering state management and security for every agent, developers define tasks, tools, and guardrails, and Anthropic runs the agent with production infrastructure. Key features include sandboxed execution, authentication, scoped permissions, and tracing, plus long-running sessions that can operate for hours and survive disconnections. A built-in orchestration harness chooses when to call tools, manages context, and handles recovery when tools fail. The launch matters because it standardises governance and observability for agents, reducing friction and risk as organisations scale agentic workflows. Learn more.

  • The Council on Foreign Relations (CFR) analysis presents Claude Mythos as a cybersecurity inflection point, citing claims that the model autonomously found thousands of zero-day vulnerabilities, including in older systems, and could produce exploit chains that enable full system takeover. The implications are framed as global: critical infrastructure, financial systems, and public services rely on ageing software that is difficult to modernise quickly. The article stresses the widening asymmetry between rapid AI-enabled vulnerability discovery and slower, human-paced patching. It notes Anthropic’s restricted release and Project Glasswing consortium as a responsible defensive approach, yet warns it will cover only a fraction of vulnerable systems. It also argues containment will be hard because advanced capabilities tend to spread through leaks or competitive replication. Learn more.

  • Government funding through the Sovereign AI Fund is directed to Ineffable Intelligence, a new UK-anchored company developing algorithms that learn through interaction and experimentation, aiming to discover novel insights rather than reproduce existing knowledge. Led by David Silver, formerly of DeepMind and known for work behind AlphaGo, the firm is presented as pushing reinforcement learning further towards systems that can generate genuinely new solutions. The announcement matters because it ties frontier AI to national industrial strategy: backing homegrown founders, supporting growth and talent in the UK, and pairing capital with access to specialist compute. It also signals an intent to move quickly, describing Sovereign AI as a venture-style unit designed to cut through red tape while protecting the UK’s economic and security interests. Learn more.

  • Gartner warns that enterprises are on the brink of widespread AI agent sprawl, predicting the average Fortune 500 organisation will run more than 150,000 AI agents by 2028, up from fewer than 15 in 2025. With only 13% of organisations confident in their governance, uncontrolled growth risks misinformation, oversharing, data loss and rising IT complexity. Rather than blocking tools, which can drive unsafe “shadow AI”, Gartner urges balanced governance that still enables innovation. The firm sets out six practical steps, including formal agent governance, a centralised agent inventory, defined identities and lifecycles, stronger information governance, continuous monitoring, and workforce training to embed responsible AI use. The guidance matters as agentic AI moves from experimentation into core business operations, making governance a board-level concern. Learn more.


This month’s news highlights an important inflection point. Powerful analytics and AI capabilities are becoming easier to deploy, but also harder to manage without deliberate architectural and governance choices. Whether it is planning a move to Fabric, scaling Copilot usage, or experimenting with agents and open models, the focus is shifting towards sustainability and trust. Keeping up with these changes is essential for teams looking to move beyond pilots and turn analytics and AI into dependable business capabilities.

Stay in the Know


Get notified when we post something new by following us on LinkedIn and X.