
When Data Pipelines Fail: How Your Network Becomes Your Superpower

  • Writer: Christian Steinert
  • Sep 30
  • 5 min read

A healthcare data consultant’s Friday morning crisis—and the lesson that changed how I think about professional relationships

The Crisis: 10 Failed Pipelines and a Racing Heart

Another crazy week of learning, moving, grooving and experiencing my way through data consulting. I woke up this past Friday to 10 failed data pipeline jobs at one of my healthcare clients.


Seeing those failure icons flipped my stomach; my heartbeat paused for a brief moment. What did I do? I’ve been heads-down the last month, simultaneously building the data warehouse and constructing report queries to automate internal reporting.



For context, these weren’t just numbers on a dashboard—failed pipelines meant our client couldn’t access critical patient flow data, delayed reporting to executives, and potentially impacted care coordination decisions. In healthcare, data delays can ripple through operations in ways that affect real patient outcomes.


Last week’s issue discussed tips for context switching as a consultant. Those tips have been helpful on this learning curve of daily chaos, mental toughness, and diverse skillsets. In the midst of this chaos, I always worry that I may have broken something with my development efforts.


I’d been making a lot of changes to the codebase and data pipeline orchestrations, better aligning our data source load times with the needs of our internal stakeholders. So these pipeline errors seemed to follow a familiar pattern: something I had changed had likely broken something downstream.


The Investigation: When Documentation Fails You

I researched the error and found a subreddit thread from the last few days highlighting the same issue. No solution from anyone, though. Keep in mind, we’re predominantly using Microsoft Fabric right now, as it has become the tool of choice in healthcare. There was no official documentation on fixing this issue. I was left puzzled.


The Solution: Bypassing Traditional Support

What’s the obvious first step when encountering an issue like this?


First, research the error and trace back any changes you made since the job last ran successfully. Adjust whatever you suspect is the cause and rerun the job.


If it still fails, submit a support ticket to your data vendor.
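That first triage step can be sketched as a small script. This is a minimal, hypothetical example in Python: the run records and field names are made up for illustration, since in practice you would pull them from your orchestrator’s run history or monitoring export.

```python
from datetime import datetime

# Hypothetical pipeline run records. In practice these would come from
# your orchestrator's run history, not a hard-coded list.
runs = [
    {"job": "patient_flow_load", "status": "Succeeded", "ended": "2025-09-25T06:00:00"},
    {"job": "patient_flow_load", "status": "Failed",    "ended": "2025-09-26T06:00:00"},
    {"job": "referrals_load",    "status": "Failed",    "ended": "2025-09-26T06:05:00"},
]

def triage(runs):
    """Return the last successful run time and every failure after it."""
    parsed = [{**r, "ended": datetime.fromisoformat(r["ended"])} for r in runs]
    successes = [r["ended"] for r in parsed if r["status"] == "Succeeded"]
    last_ok = max(successes) if successes else None
    failures = [
        r for r in parsed
        if r["status"] == "Failed" and (last_ok is None or r["ended"] > last_ok)
    ]
    return last_ok, failures

last_ok, failures = triage(runs)
print(f"Last success: {last_ok}")
print(f"Failed since then: {[f['job'] for f in failures]}")
# From here, review every change made after last_ok, e.g. with
#   git log --since="<last_ok>" --oneline
# before tweaking and rerunning the jobs.
```

The point of the sketch is simply to narrow the search window: once you know the last successful run time, you only have to audit the changes made after it.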


Contrary to that protocol, I escalated more aggressively than a support ticket. My impression of larger vendors (and it’s nothing against them) is that tickets from smaller clients may not be prioritized compared to larger accounts. Understandably so.


Because Fabric is new, improvements and bug fixes are constantly underway as the platform matures, and the number of open support tickets is staggering.


Although I did end up submitting a support ticket with Microsoft later on, my gut instinct had me default to something else. I posted the issue in two data communities I’m part of and called our data architect (who has connections to Microsoft Fabric’s product team).


I leveraged my own network.


Here’s what I learned about balancing official channels with community networks: both have their place, but when you’re dealing with time-sensitive issues that impact client operations, having multiple pathways to solutions becomes invaluable.


The Breakthrough: Community Power in Action

Here’s the unlock: I was astonished at the power of tapping into my network. The responses I got from these communities and our data architect were fast (within one to three hours). Thank you to everyone who stepped up to the plate to offer help!


By early afternoon, I had a meeting booked with the higher-ups at Microsoft who handle Data Factory in Fabric. We debugged successfully on that call and fixed all of the pipeline issues. What could have been a multi-day outage was resolved in under six hours thanks to community connections.


This wasn’t even standard protocol from Microsoft. They booked a debug meeting with me before I ever submitted a support ticket. Kudos to their customer care team for being so willing to help based on the Fabric / data community as opposed to the standard process. That’s just how connected the communities and people I’m a part of are.


I don’t say this to boast; I say this to encourage. NEVER underestimate the power of your network, especially in a niche field like data science. You’d be surprised how tight-knit the data community really is.


For those wondering where to start: I’m active in specific Reddit communities like r/MicrosoftFabric and r/dataengineering, a Discord group focused on data engineering, and two Circle data leadership communities. The key is contributing to these communities regularly, not just showing up when you need something (I’m still getting better at this, as LinkedIn and Substack have dominated a lot of my content engagement). Answer questions, share insights, celebrate others’ wins. Then, when you need help, people know you’re genuinely invested in the community.


If you’re willing to put yourself out there on LinkedIn, Substack, and other social media vehicles, doors open that not only help your career but also enable you to provide a better product or service for your clients!


The Bigger Picture: Trust-Building Through Crisis Management

Reflecting on it over the last few days: beyond establishing a strong network for yourself, the real differentiator as a quality data consultant is showing your client how you handle issues like these.


Ethan Aaron (CEO @ Portable) made an incredible LinkedIn post six months ago about this. The beauty of long term client relationships isn’t when everything is going right. It’s when things go wrong. Crisis response becomes your differentiator. Clients don’t just want to know you can build systems—they need to know you can save them when those systems fail.


Last Friday was an opportunity to demonstrate my commitment to my client.


Things went wrong.


I didn’t sit there and point a finger. Instead, I used it as a trust-building opportunity to show them the power of my network, my resources, and my commitment to their success. All of this amplifies trust, especially when you’re in business for long-term partnerships.


I play the long game. I founded Steinert Analytics on these core values:

  • Unrivaled Transparency

  • Communication

  • An Egoless Candor

  • A Thirst for Continuous Learning


All of these values are, in my opinion, key ingredients to successful long-term partnerships. Not only with clients, but with the people in our network too.


Your Action Plan: Start Building Your Network Today

My encouragement for all of you is to start playing the long game now.


So the next time something goes wrong, you have a fruitful network of professionals you can reach out to for help (just don’t defy the support ticketing process of a software vendor like I did 😄).


That long-term mindset, paired with genuine connection, is one of the biggest ways you can differentiate yourself as a consultant and data professional.


I’m sharing this story as evidence.


This experience reinforced something I’m seeing across healthcare organizations: the most successful data initiatives aren’t just about technology—they’re about relationships.


That’s it for this week’s issue.


See you next week.


Christian Steinert is the founder of Steinert Analytics, helping healthcare & roofing organizations turn data into actionable insights. Subscribe to Rooftop Insights for weekly perspectives on analytics and business intelligence in these industries.


Feel free to book a call with us here or reach out to Christian on LinkedIn. Thank you!


Also - check out our free Healthcare Analytics Playbook email course here.
