At Bobsled, we talk with product leaders every day who are tasked with delivering analytical data to customers. Sometimes they manage proprietary data products that they sell (think a data provider like CoreLogic), and other times they are building ways for customers to access the data they generate inside their application (think a SaaS app like Stripe).
In both cases, the challenge remains the same: how do I empower my customers to start working with our data as fast as possible without burying my engineers in hours of work to meet custom requests? And how do I do it while managing competing roadmap priorities and the legal and organizational challenges of selling and sharing data?
In this post we outline why leading data and software companies are investing in auto-fulfillment to address these questions and walk through a few common deployment models we see with our customers.
What is auto-fulfillment?
Auto-fulfillment is a north star for many data product management teams. The vision behind auto-fulfillment is simple: the moment a customer has the right to access a data product, they should be able to start putting it to work. In practice, auto-fulfillment is the process through which data is programmatically entitled, accessed and integrated into a customer environment.
Traditionally, the biggest barrier to auto-fulfillment has centered on the last step in that process: integration. With existing forms of delivery such as file transfers or APIs, engineers can automate the way in which data is entitled or accessed. However, in both cases, customers typically receive a file, which still requires substantial ETL to prepare for analysis.
The good news is that a new generation of data sharing technologies from the major cloud and data platforms has made that vision attainable over the past few years. Every major data platform now offers some form of in-place data sharing protocol (e.g. Snowflake Sharing, Delta Sharing). With in-place sharing, providers share access to a data product – not the product itself. That means there’s no data to extract, files to load, or fields to transform.
The bad news is that supporting these protocols effectively shifts the ETL requirement to providers. In order to support “native” sharing, providers have to move their data into the same platform (and region) as their customers. This not only creates substantial complexity for engineering teams; it also limits providers’ ability to actually deliver on the promise of auto-fulfillment. Certain destinations are not supported, provisioning takes days, and key features are left out.
How Bobsled enables auto-fulfillment
Bobsled brings auto-fulfillment within reach for product and engineering teams by offering a simple service to share data to any data lake or platform. With Bobsled, engineering teams no longer have to manage the permissioning, replication, and orchestration of data across multiple platforms in order to support data sharing. Instead, they connect Bobsled to their data lake or warehouse once and are instantly able to deliver ready-to-query data natively to customers in every region of every platform.
Every company is different. Some have legal, compliance or business requirements that limit what they can – or want to – automate. With the Bobsled app and suite of APIs, companies can get started quickly and build toward more embedded and automated fulfillment options over time.
Back-office auto-fulfillment
For your customers, the ability to receive data via a native share in their platform is a massive leap forward from API or SFTP delivery. That’s why most teams start using Bobsled by initiating and managing shares via the app. Product teams can start offering sharing to customers in every platform and region immediately, and engineering teams get to vet Bobsled through a simple UI.
The process is simple: pick the files you want to deliver from your source, configure how the product is shared, select the customer’s platform, region and account, and then initiate a delivery. Within seconds, a customer gets access to ready-to-query data natively within their platform of choice – whether that’s a table in Snowflake or a file in AWS S3.
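To make those steps concrete, the information gathered along the way can be thought of as a small request shape like the sketch below. The field names are illustrative assumptions, not Bobsled’s actual API, but they mirror what the app collects: what to deliver, how it is shared, and where it should land.

```typescript
// Illustrative sketch only: these field names are assumptions, not Bobsled's
// documented API. The shape mirrors the steps above: the source files to
// deliver, how the product is shared, and the customer's destination.
interface DeliveryRequest {
  sourcePrefix: string;   // files to deliver from your source bucket or warehouse
  shareName: string;      // how the shared product is presented to the customer
  destination: {
    platform: "snowflake" | "databricks" | "bigquery" | "s3"; // customer's platform
    region: string;       // e.g. "us-east-1"
    account: string;      // the customer's account identifier on that platform
  };
}
```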
Embedded auto-fulfillment
Once a team has met the initial customer demand, the next step is to streamline the process by embedding fulfillment into their existing onboarding process. These investments are all about freeing up engineering time and removing the bottleneck that slows down delivery.
We see two common patterns. First, a team might have an internal application, built with a platform like Retool, that they already use to manage onboarding. Using the Bobsled API, they can build a “fulfillment” module into that onboarding workflow so that internal users never have to leave the existing app.
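As a rough illustration, a fulfillment step in that internal workflow might look like the sketch below. The BobsledClient interface and its createDelivery method are hypothetical stand-ins for the real API, shown only to convey the shape of the integration.

```typescript
// Hypothetical sketch of a "fulfillment" step behind an internal onboarding
// app (e.g. triggered by a Retool button). BobsledClient and createDelivery
// are illustrative assumptions, not the real Bobsled SDK.
interface BobsledClient {
  createDelivery(args: {
    product: string;
    platform: string;
    region: string;
    account: string;
  }): Promise<{ deliveryId: string }>;
}

async function fulfillNewCustomer(
  bobsled: BobsledClient,
  customer: { products: string[]; platform: string; region: string; account: string }
): Promise<string[]> {
  const deliveryIds: string[] = [];
  // One delivery per entitled product, all landing in the destination the
  // customer supplied during onboarding.
  for (const product of customer.products) {
    const { deliveryId } = await bobsled.createDelivery({
      product,
      platform: customer.platform,
      region: customer.region,
      account: customer.account,
    });
    deliveryIds.push(deliveryId);
  }
  return deliveryIds;
}
```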
Second, we see teams integrating Bobsled into their CRM. Platforms like Salesforce and HubSpot often serve as the system of record for business teams. Using the Bobsled API, sales teams can automatically initiate a delivery from within the customer record in the CRM.
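In practice, this kind of CRM integration often boils down to a small handler that listens for a “deal closed” event and kicks off the delivery. The event fields and endpoint below are illustrative assumptions rather than a real Salesforce or HubSpot payload or the actual Bobsled API.

```typescript
// Hypothetical sketch: react to a CRM "closed-won" event by initiating a
// delivery. Event fields and the endpoint are illustrative assumptions, not
// a real Salesforce/HubSpot schema or Bobsled's documented API.
interface ClosedWonEvent {
  dealId: string;
  productSku: string;          // which data product was sold
  destinationPlatform: string; // e.g. "snowflake"
  destinationRegion: string;   // e.g. "us-east-1"
  destinationAccount: string;  // the customer's account on that platform
}

async function handleClosedWon(event: ClosedWonEvent): Promise<void> {
  const res = await fetch("https://api.bobsled.example/v1/deliveries", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.BOBSLED_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      product: event.productSku,
      destination: {
        platform: event.destinationPlatform,
        region: event.destinationRegion,
        account: event.destinationAccount,
      },
    }),
  });
  if (!res.ok) {
    throw new Error(`Delivery for deal ${event.dealId} failed: ${res.status}`);
  }
}
```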
Self-serve auto-fulfillment
For many product teams, the end goal is to allow customers to subscribe to and access data products automatically from an existing software app. This is often the case for companies that already offer a software experience: everything from a SaaS company sharing application data with its customers to a data provider with a catalog or marketplace that customers use to browse its products.
The most advanced teams use the Bobsled API to build fully self-serve data sharing experiences within applications. Once authenticated, a customer can pick the products they want to access, specify the platform, region and account they want the data to go to, and press share. Bobsled handles the rest and customers instantly receive their data without any human involvement.
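A self-serve flow of this kind could be exposed as an authenticated endpoint in the provider’s own app, along the lines of the sketch below. The route, request fields, and downstream call are all hypothetical; authentication and entitlement checks are assumed to happen in middleware.

```typescript
// Hypothetical sketch of a self-serve "share" endpoint in a provider's app.
// The route, request fields, and downstream call are illustrative assumptions;
// authentication and entitlement checks are assumed to run in middleware.
import express from "express";

const app = express();
app.use(express.json());

app.post("/api/shares", async (req, res) => {
  const { productId, platform, region, account } = req.body;

  // Forward the customer's choices to the fulfillment service
  // (endpoint is a placeholder, not Bobsled's actual API).
  const delivery = await fetch("https://api.bobsled.example/v1/deliveries", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.BOBSLED_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ product: productId, destination: { platform, region, account } }),
  });

  if (!delivery.ok) {
    return res.status(502).json({ error: "Delivery could not be initiated" });
  }
  res.status(202).json(await delivery.json());
});

app.listen(3000);
```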
Learn more about how Bobsled powers fulfillment for leading data and software companies like ZoomInfo, CoreLogic and CARTO by exploring our docs or setting up a demo.