    November 3, 2020

    Aiming at DataOps? Better Start with the Basics

    As organizations pursue their digital transformation ambitions, data is at the heart of innovation, powering everything from customer experience to sales, people, and finance. But here is the problem: many IT organizations still devote up to 80% of their time to getting data rather than using it.

    With artificial intelligence becoming a growing enterprise priority, data preparation and sourcing are expected to become an even more significant pain point, impeding a team’s ability to train machine learning models properly. Cumbersome data sourcing and management do not fit into our new fast-changing business environment. In other words, your ability to harness an ever-growing volume, variety, and velocity of data is a significant determinant of your capacity to drive innovation and future growth.

    What is DataOps?

    To better leverage the benefits of artificial intelligence, it is necessary to integrate analytics pipelines with DevOps and Agile practices, where building, testing, provisioning, and deployment are all run as automated processes. Emerging disciplines—such as DataOps—incorporate agile approaches to minimize the cycle time of analytics development.

    Big data projects are often too big or too complex to handle the traditional way. You probably won’t have access to a proper staging environment and will only have limited time and scale for qualification. In other words, big data implicitly promotes DevOps, because there is no real possibility of separating operations from development if you’re ultimately discovering the relevance of your algorithms in production.

    Through DataOps, teams aim to orchestrate environments, tools, models, and data from an end-to-end perspective. Together with data scientists, operations teams can improve the data management lifecycle and enable a fully efficient data pipeline. However, to put DataOps to work, your organization first needs to address some specific organizational and automation challenges.

    Bridge Technology and Functional Silos

    When you look at the vast number of new projects aimed at leveraging the value of existing data, you quickly realize that many companies run big data environments in near-total isolation from the rest of enterprise business processes. Collaboration is essential. But running big data as a silo prevents you from integrating data-driven environments into your enterprise DevOps initiatives. Teams, tools, and infrastructures all need to be coordinated. And for that, you need automation that goes beyond silos.

    Remove Manual Handoffs and Establish End-to-End Process Automation

    Point automation tools may be reliable and sufficiently scalable for low-volume, simple task scheduling. However, as workload volume or complexity increases, the manual effort needed to keep up grows exponentially, exposing these tools’ shortcomings.

    Manual handoffs and ad-hoc scripting that connect tools and teams create delays, bottlenecks, and errors. Typical pipelines deal with extremely large volumes of data. Missing one step in the process, or executing a step at the wrong time, can result in a significant amount of wasted resources—or, in the worst-case scenario, inconsistent data. By establishing end-to-end automation of data pipelines, you reduce the potential for human errors, improve data quality, and accelerate data flows. Ultimately, you speed up the time to value from your data.
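The idea of replacing manual handoffs with a dependency-ordered flow can be sketched in a few lines of Python. This is a minimal illustration, not any particular product’s API; the step names (extract, validate, transform, load) and sample records are invented for the example.

```python
# Minimal sketch of an end-to-end pipeline runner. Each step runs only
# after its predecessor succeeds, so a failure halts the flow instead of
# pushing inconsistent data downstream. All step names and data here are
# hypothetical.

def extract():
    return [{"id": 1, "amount": 250}, {"id": 2, "amount": -40}]

def validate(rows):
    # Reject the whole batch if any record is malformed: better to stop
    # early than to waste compute on bad data.
    bad = [r for r in rows if r["amount"] < 0]
    if bad:
        raise ValueError(f"{len(bad)} invalid record(s); halting pipeline")
    return rows

def transform(rows):
    return [{**r, "amount_eur": r["amount"] * 0.92} for r in rows]

def load(rows):
    print(f"loaded {len(rows)} records")
    return rows

def run_pipeline(steps, data=None):
    """Run steps in order; any exception stops the remaining steps."""
    for step in steps:
        data = step(data) if data is not None else step()
    return data

try:
    run_pipeline([extract, validate, transform, load])
except ValueError as err:
    print(f"pipeline stopped: {err}")
```

Because the runner stops at the first failing step, a bad record never reaches the load stage; real orchestration tools add retries, alerting, and scheduling on top of this same sequencing principle.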

    Offer Self-Service Capabilities

    Often, data scientists and other non-IT stakeholders find data flows too complex to design and manage themselves. As a result, they routinely call upon extended support from IT experts. Consequently, many IT organizations struggle to scale with the volume of data, or the number of data sources, without slowing development cycle times.

    In order to hide the complexities of big data, while providing users with the data accessibility and insights they want, IT teams need to offer a catalog of automated services. By reducing the delays teams face in accessing and processing data, you significantly improve innovation and customer experience.
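One way to picture such a catalog of automated services: IT publishes pre-built, parameterized pipelines under friendly names, and data scientists request them without touching the underlying code. The service names and parameters below are purely illustrative, not from any specific product.

```python
# Hypothetical sketch of a self-service data catalog. IT registers
# parameterized services; users request them by name, never seeing the
# pipeline internals.

def daily_sales_extract(region):
    # Stand-in for a real extraction pipeline.
    return f"sales data for {region}, refreshed daily"

def customer_360_view(customer_id):
    # Stand-in for a profile-assembly pipeline.
    return f"unified profile for customer {customer_id}"

CATALOG = {
    "daily-sales": daily_sales_extract,
    "customer-360": customer_360_view,
}

def request_service(name, **params):
    """Look up a published service and run it with user-supplied parameters."""
    if name not in CATALOG:
        raise KeyError(f"unknown service '{name}'; available: {sorted(CATALOG)}")
    return CATALOG[name](**params)

print(request_service("daily-sales", region="EMEA"))
```

The catalog acts as the contract between IT and its consumers: IT can evolve the implementation behind each name without breaking the people who depend on it.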

    Integrate Analytics with Other Enterprise Business Processes

    As you work to do more and more with big data, it’s only natural to expect your end-to-end business workflows to include an increasingly intricate blend of big data and traditional application jobs. Conventional data tools fall short in this regard, as you need to manage a mix of data movements and data processing. Big data schedulers only automate specific tasks on specific technologies. As a result, you create more siloed islands of automation and risk losing control of the end-to-end data pipeline.

    You ultimately need to capture, store, and process data as it arrives and distribute it to downstream applications, often in near real-time. Here, holistic automation can help your organization to improve the integration between modern data flows, and traditional enterprise business processes.
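Distributing data to downstream applications as it arrives is commonly done with a publish/subscribe pattern. The toy sketch below shows the idea in-process; the topic and consumer names are invented, and a real deployment would use a durable messaging system.

```python
# Toy publish/subscribe sketch: records are pushed to every registered
# downstream consumer as soon as they arrive, with no batch window and
# no manual handoff. Names are illustrative only.

subscribers = {}

def subscribe(topic, handler):
    """Register a downstream application's handler on a topic."""
    subscribers.setdefault(topic, []).append(handler)

def publish(topic, record):
    # Fan the record out to every consumer registered on the topic.
    for handler in subscribers.get(topic, []):
        handler(record)

analytics_inbox, billing_inbox = [], []
subscribe("orders", analytics_inbox.append)  # e.g., an analytics pipeline
subscribe("orders", billing_inbox.append)    # e.g., a billing application

publish("orders", {"order_id": 42, "total": 99.5})
print(len(analytics_inbox), len(billing_inbox))
```

Both consumers receive the record the moment it is published; adding a new downstream application is just another `subscribe` call, not a change to the producer.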

    Establish a High Level of Process Standardization and Governance

    Data pipelines reach across your company, your business partners, your supply chain, your SaaS offerings, and more. It’s a complex and diverse IT and business landscape. Lack of visibility and control can also be the source of regulatory headaches. Every country, state, and industry has compliance measures that require detailed logs and audit trails of content encryption, access, authorization, and usage. By implementing end-to-end automation, you can have a single, centralized point of control for data movements and policy management. Further, you can quickly demonstrate your methodology and compliance to mitigate risks and respond to audits with confidence.
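The "single, centralized point of control" idea can be illustrated with a small sketch: every data movement passes through one wrapper that records who did what, and when. The action and user names are hypothetical, and a real audit trail would live in durable, tamper-evident storage rather than an in-memory list.

```python
# Illustrative sketch of a central audit trail for data movements.
# Every audited action appends a who/what/when record to one log.
import datetime

AUDIT_LOG = []  # in practice: durable, centralized, access-controlled storage

def audited(action, user):
    """Decorator: record who ran which data action, and when (UTC)."""
    def wrap(fn):
        def inner(*args, **kwargs):
            AUDIT_LOG.append({
                "action": action,
                "user": user,
                "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            })
            return fn(*args, **kwargs)
        return inner
    return wrap

@audited("transfer:crm->warehouse", user="etl-service")
def transfer_crm_data():
    # Stand-in for an actual data movement.
    return "transferred"

transfer_crm_data()
print(len(AUDIT_LOG), AUDIT_LOG[0]["action"])
```

Because the log entry is written by the same mechanism that runs the movement, the audit trail cannot silently drift out of sync with what actually happened, which is what makes compliance evidence quick to produce.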


    Data science and analytics have become critical assets for any enterprise. Now, the business is calling out for timely and consistent data to support innovation and growth. Your challenge is to accelerate and streamline that data delivery, so you can more quickly transform source data into valuable insights.

    As you consider DataOps as a possible game changer in your data management practice, it is important to recognize that it has to be properly applied within your organization, like any new approach. For that, you need mature processes that can help industrialize data pipelines, so you can increase agility, reduce cycle times, and build a new level of trust in the use of artificial intelligence technologies. Therefore, you need to go back to the basics: By orchestrating tools, teams, infrastructure, and data, your organization can be uniquely positioned to acquire more value from data. That’s quite an incentive to start reviewing automation strategies and modernizing data pipelines.

    Yann Guernion

    Yann has several decades of experience in the software industry, from development to operations to marketing of enterprise solutions. He helps Broadcom deliver market-leading solutions with a focus on Automation, DevOps, and Big Data.
