Hacker News

Launch HN: Bracket (YC W22) – Two-Way Sync Between Salesforce and Postgres

Hey HN, I’m Ian, co-founder at Bracket (https://www.usebracket.com) along with Kunal and Vinesh. Bracket makes it easy to set up fast, bidirectional syncs between Salesforce and Postgres.

We have two main use cases: 1) building apps on top of Postgres instead of Salesforce, and 2) replacing existing Salesforce <> Postgres syncs (either Heroku Connect or in-house builds).

Postgres makes a bunch of things easy for developers: building responsive web apps, handling SSO and user access rules, and scaling large datasets like time-series data. But Sales, Customer Success, and Ops teams usually don’t have direct access to Postgres - instead, they rely on Salesforce as a type of database. These teams need up-to-date data on users, orders, and services, and they need to edit this data. As a result, in a lot of organizations there’s a sort of abyss between Postgres and Salesforce.

For example: say you're a car rental company. People rent cars via your web app built on Postgres, but your CX team uses Salesforce to track/update users, cars, and rentals. One of your users calls in to say that they were in an accident and the car is totaled. Your CX team (on the Salesforce side of the abyss) needs to manually update the status of the car - "Unavailable" - and reassign upcoming reservations to other cars. These edits made in Salesforce must sync to Postgres so that the user sees their updated reservation details.

We first came across this syncing problem when we were deep in pivot hell during YC W22. At the time, we were two weeks away from Demo Day and we had been pivoting for five weeks. We felt like failures every morning, and it seemed inevitable that we’d drop out. Then we talked to a founder who told us how hard it is to simply keep Airtable and MongoDB in sync with each other. He tried Zapier, he tried writing custom scripts, all to no avail. At last, we had 1) a technical problem 2) frustrating a smart founder 3) with a big potential market. We started to build, raised a conservative amount of money at Demo Day, and kept our burn extremely low.

After a year, we had a product keeping Airtable, Notion, and Google Sheet tables in sync with larger databases, but it still felt like a stop-gap: companies were often using us to stand up lightweight BI, avoid creating internal tools from scratch, or build quick admin dashboards. Once they felt the limitations of, say, Google Sheets, they moved off Bracket to a more permanent solution. Not only did this shrink the size of the market, but also left us feeling like we were creating a vitamin, not a painkiller.

Then we talked with companies who wanted to sync CRMs - specifically Salesforce - with Postgres. They either had high-maintenance in-house solutions for syncing, or they were using Heroku Connect, which enables two-way syncs between Salesforce and Heroku-hosted Postgres. They couldn’t get rid of Salesforce and they couldn’t allow the two systems to get out of sync, so they were stuck with Heroku Connect.

There are two major problems with Heroku Connect, though: 1) it's super expensive, and 2) it ties you to Heroku Enterprise as a hosting platform. These companies wanted something as reliable as Heroku Connect, but hosting-agnostic and priced competitively. We sensed an opportunity to build something useful here, so we got to it.

Bracket makes it easy (90 seconds of setup) to get a Salesforce object and a Postgres table syncing with each other in near-real-time. Using our app, you connect your Salesforce via OAuth, connect your Postgres via a connection URI (with options for SSL protocols), and either have Bracket generate a Postgres table from scratch or map fields between Salesforce and an existing Postgres table.

Once connected, Bracket can sync two ways or one way at a cadence decided by the user. We offer two sync methods: polling (default) and streaming. Using the polling sync method, changes sync on average every 30-60 seconds. Using streaming, changes sync on average every 10-30 seconds. You can read about how each method works, and the APIs they use, here: https://docs.usebracket.com/polling
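To illustrate the polling method: each poll boils down to diffing the current state of each side against the last snapshot. A minimal sketch (hypothetical names, not Bracket's actual implementation):

```python
def diff_snapshots(previous, current):
    """Compare two {record_id: fields} snapshots and return the
    records created, updated, or deleted between polls."""
    created = {k: v for k, v in current.items() if k not in previous}
    deleted = {k: v for k, v in previous.items() if k not in current}
    updated = {
        k: v
        for k, v in current.items()
        if k in previous and previous[k] != v
    }
    return created, updated, deleted
```

At each poll, a sync engine would run this diff on both the Salesforce side and the Postgres side, then push each side's changes to the other.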

We offer a few monthly subscription plans based on the amount of data kept in sync, with a free starter plan. You can try us without a credit card at https://app.usebracket.com/. If you don’t have a Salesforce or Postgres already set up, you can see Bracket in action here: https://www.youtube.com/watch?v=sRkaAa667T0

We’re hoping to build the best two-way syncing tool possible. We’ve got tools like Hubspot and MySQL in beta, and we’d love your feedback on other integrations that would be useful, product experience, and anything else that comes to your mind. It’s all very much appreciated. Thank you!


atraac

We've built a similar thing to sync certain Salesforce objects to our .NET backend, but Salesforce has their streaming built around CometD, which uses something I believe an insane person came up with - the Bayeux protocol. It's essentially HTTP streaming: you fire a GET at an API and it starts streaming bytes of data. If the stream stops or times out, you're supposed to fire another GET to continue streaming; if nothing comes back, you time out and retry.
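The retry dance described above boils down to something like this (a rough sketch of the client loop, not CometD's actual implementation; `fetch` stands in for the blocking HTTP GET):

```python
def long_poll(fetch, max_retries=3):
    """Bayeux-style long polling: keep re-issuing the blocking GET.
    A timeout is normal (it just means no events arrived), so we
    reconnect; only give up after max_retries consecutive timeouts."""
    timeouts = 0
    while timeouts < max_retries:
        try:
            batch = fetch()   # blocking GET; returns events or times out
        except TimeoutError:
            timeouts += 1     # "nothing came back": reconnect and retry
            continue
        timeouts = 0          # got data, reset the consecutive-timeout count
        yield from batch
```

The awkward part is exactly what the comment describes: the timeout path is the normal path, so naive monitoring flags healthy connections as failures.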

It's absolute hell: most community frameworks barely work, there are zero good solutions for error handling, and our Azure Application Insights was constantly red because of timed-out requests that are in fact fine - they just mean there were no events to transmit.

I refuse to believe that a multi-billion dollar company like Salesforce couldn't come up with a RabbitMQ sink or virtually any better solution to this problem, especially since they could gate it behind a subscription most companies would pay for, given they spend millions on SF either way...

kunalrgarg

Wow, congrats on making it through with CometD and Bayeux. I think Salesforce has realized that a lot of their APIs aren't useful when they take non-standard approaches. The move from SOAP to also including REST was a signal that they're trying to be more useful in this realm. I definitely agree a RabbitMQ sink would've been ideal! I'm hoping it's somewhere on their roadmap to make our lives easier.

wmfiv

I don't have any first-hand experience, but there is an integration with AWS EventBridge.

https://aws.amazon.com/blogs/compute/building-salesforce-int...

I know even less about Azure, but it looks like Azure Data Factory (?) provides some kind of similar functionality?

https://learn.microsoft.com/en-us/azure/data-factory/connect...

robertlagrant

From memory, one reasonably good solution I saw was from a company called Validic, which did IoT integration for medical devices. They had you set up an HTTP SSE stream that you just connected to and consumed from as events came through.
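For comparison, consuming an SSE stream takes very little client code. A minimal parser over the raw response lines (simplified: only the `data:` field is handled, ignoring `id`/`event`/`retry`):

```python
def parse_sse(lines):
    """Parse Server-Sent Events from an iterable of text lines.
    Yields the accumulated data payload of each event; a blank
    line terminates the event, per the SSE format."""
    data = []
    for line in lines:
        if line.startswith("data:"):
            data.append(line[5:].lstrip())
        elif line == "" and data:
            yield "\n".join(data)
            data = []
```

In practice you'd feed it something like `requests.get(url, stream=True).iter_lines(decode_unicode=True)` and reconnect when the connection drops.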

kunalrgarg

I haven’t heard of Validic, but I’ll definitely check it out. Thanks for flagging!

salesforcequit

I tried to build this a few years ago using Salesforce’s webhooks and discovered that low-value tenants are on shared infrastructure where things like scheduled jobs can run instantly… or 10 minutes later, and Salesforce makes no guarantees about when things will happen, making any attempt to use Salesforce as a source of truth unreliable.

The polling solution is neat but I am imagining it’ll run into issues with API limits and performance, especially for tenants on shared infrastructure: have you encountered that? Are you limited to working with customers that are paying Salesforce enough to have fast/reliable infrastructure?

(I don’t think it’s a problem for your success as your product is most valuable to those paying Salesforce for reliability, I’m just curious how you are thinking about the problems I gave up on! Maybe things with Salesforce have changed in the last 5 years.)

kunalrgarg

That's awesome that you set this up, thanks for paving the way! Our users who want real-time syncs and are watchful of their REST API limits typically opt for our streaming solution (I believe the Pub/Sub API was a recent addition to their change data capture APIs). Too much polling definitely has its issues with limits, as you mentioned, but we allow our users to set their own frequency to meet their needs.

If I'm understanding correctly, the scheduled jobs refer to the Bulk API (I agree it executes at seemingly random speeds). We only use the Bulk API for the initial “seed”, where we write a large amount of data from Salesforce to Postgres. Otherwise, when it comes to reading/writing data, we stick to the REST API, which we’ve found pretty performant and which Heroku Connect seems to rely on, too: https://devcenter.heroku.com/articles/mapping-configuration-...

> Are you limited to working with customers that are paying Salesforce enough…

Yeah, right now we do require that users are on Salesforce plans that include API access, which are Performance, Developer, Unlimited, and Enterprise (or Professional w/ API add-ons).

couchand

I'm guessing the GP's scheduled jobs are running within Salesforce, probably Apex. I'd note that I've seen inconsistent async processing delays even in EE and UE clients. First of all, I'm pretty sure everyone is on shared infrastructure, and second, the delay is at least in part relative to the amount of recent processing.

kunalrgarg

Yeah, so far, we’ve found that this combination of the three APIs is a happy medium between reliability, simplicity, and API limit consciousness.

teej

Salesforce reps don't even know what Heroku Connect is. I couldn't get someone to sell it to me! I think the idea is great, glad to see someone picking it up.

vinarun

Yeah, agreed! We were surprised to hear how deeply bundled Heroku Connect is in their Enterprise plan. Reps not knowing about Heroku Connect, coupled with its abandoned roadmap, makes us excited to build a better solution.

no_wizard

I know this is a late comment, but I have to ask, what do you think the market opportunity is on this?

500 million? Billion? Billion plus?

I'm curious how big, by your estimation, a business like this can grow when it's built on someone else's platform.

ianyanusko

Good question. Our ultimate goal is to be the go-to for any two-way data syncs.

We think the Salesforce syncing market by itself is $500M to $1B. If you start including all of the tools we have our eyes on (Hubspot, Monday, ERPs), the market size comfortably gets into the billions.

Going after Heroku Connect makes sense as a starting point, but we've got our sights beyond that.

jonathanpglick

Yeah, I used it quite successfully for a local nonprofit that kept most of their data in Salesforce. The eventually consistent syncing sidestepped a ton of headaches for us, and we could read straight from PostgreSQL. For important changes originating from the website (registrations, etc.), we just used API calls followed by writes to PostgreSQL.

quickthrower2

As a developer I love the idea. Choosing Postgres over, say, a GraphQL API is bold, but it makes sense. The customer can then scale up their Postgres instance as they hammer it more and more, and they might already be using it for their app, so developers can add it to their ORM. Or they could wrap it in a microservice.

I also liked the recent submission about a git client that uses SQL. I like the idea that more things can be exposed as SQL (either directly or by syncing with an RDBMS). There is a lot of good tooling around it, and despite the S not meaning Standard, the dialects are close enough that it isn't a problem.

ianyanusko

Agreed! We’re big fans of consolidating in Postgres. We’re also hearing from our users about some downstream benefits of having things consolidated in Postgres (it makes reporting easier, lets the data team use SQL rather than hitting an API, etc.).

motrm

This is very cool! I built a similar tool for a project with the need for two-way data syncing between Salesforce and MySQL via Laravel.

Salesforce objects map quite nicely onto Laravel’s Eloquent models (and booleans work fine! ref @ianyanusko in a sibling thread)

On the Salesforce side we use triggers to send a summary of field changes to Laravel to apply to its MySQL database. These are very cheap in terms of Salesforce limits and consumption!

Changes originating from the Laravel side use Salesforce’s REST API. It’s handy taking the round trip through Salesforce when saving changes as it lets flows/processes run and formulas to do their thing before data is persisted in MySQL.

Syncing data from Salesforce (to seed a database for example) is done via REST too. It works OK.

I considered productising it at one point, but ideas are a dime a dozen; it’s a lot harder to execute well, and Bracket is doing exactly that. Kudos!

This has been a thoroughly interesting post, and I’ll keep my eye on Bracket. You are, however, out of budget for my client; we had to achieve syncing on a shoestring ;)

ianyanusko

Nice! Great point about the round trip - we do something similar for formulas and auto-generated fields like `Id`. That's awesome you built this in-house.

> Syncing data from Salesforce (to seed a database for example) is done via REST too. It works OK.

Have you thought about using the Bulk API for seeding? We started relying on that instead of REST, which helped us seed massive DBs much faster / more efficiently.

motrm

REST isn't too painful yet, but it will be in the not-too-distant future.

As you know, the REST API delivers a maximum of 2,000 records per request, so beyond a certain scale it's not really tenable in terms of speed and API call consumption.
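For reference, that pagination follows the `nextRecordsUrl` cursor the Salesforce query endpoint returns alongside each batch of records. A sketch, with `fetch` standing in for an authenticated HTTP GET that returns parsed JSON:

```python
def fetch_all_records(fetch, soql):
    """Follow Salesforce REST query pagination: each response holds
    up to ~2000 records plus done/nextRecordsUrl until done is True."""
    records = []
    page = fetch(f"/services/data/v58.0/query?q={soql}")
    records.extend(page["records"])
    while not page["done"]:
        page = fetch(page["nextRecordsUrl"])  # one API call per batch
        records.extend(page["records"])
    return records
```

Each loop iteration burns an API call, which is exactly why this stops scaling and the Bulk API becomes attractive for large seeds.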

So yes, Bulk API is probably going to feature soon.

Cheers Ian!

hhthrowaway1230

Nice, I can see the need for this.

> With the polling method, Bracket stores an encrypted copy of your data as an intermediate source of truth. We mostly do this to prevent infinite event loops, but it also helps with merge conflict resolution.

I see you have an on-prem version, but I'm still not convinced why you need to store it. Couldn't it just be stored in an extra table on the client's side?

kunalrgarg

Yes, you’re correct on that! Sorry for the lack of clarity in the docs. In these cases, the snapshots are stored in a client-side table/collection, which can live in your own Postgres or MongoDB.

bambax

Neat! How specific is your solution to Postgres? Could it be ported to another db engine?

(And how are conflicts resolved? In a huge system with millions of records coming from everywhere, it can quickly become nightmarish.)

ianyanusko

> How specific is your solution to Postgres? Could it be ported to another db engine?

Our polling approach is relatively database-agnostic. We just need to handle each DB's quirks with our transformers (e.g. dealing with MySQL's lack of BOOL field types).

Streaming is currently Postgres-specific. We're planning on rolling out support for MySQL next, after we've finished our Hubspot integration. Do you have a specific DB in mind?
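The MySQL quirk mentioned above can be handled with a small transformer pair. MySQL has no true BOOL type (BOOLEAN is an alias for TINYINT(1)), so values need converting in each direction. A sketch with hypothetical function names, not Bracket's actual transformers:

```python
def pg_bool_to_mysql(value):
    """Postgres BOOLEAN -> MySQL TINYINT(1): True/False become 1/0."""
    return None if value is None else int(bool(value))

def mysql_to_pg_bool(value):
    """MySQL TINYINT(1) -> Postgres BOOLEAN: nonzero means True."""
    return None if value is None else bool(value)
```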

> (And, how are conflicts resolved? In a huge system with millions of records coming from everywhere it can fast become nightmarish?)

The primary source wins any merge conflicts that happen within a sync period. With polling, it's pretty straightforward: at every poll, we see how each side has changed, and for any record pairings for which there were edits on both sides, we prefer the primary source.

With streaming, we employ a hybrid method, where we only poll when events occur in either Salesforce or Postgres. If at that poll, the same record has been edited on both sides since the previous poll, we still prioritize the primary source (Salesforce). You can read the step-by-step flow here: https://docs.usebracket.com/streaming#the-streaming-sync-met...
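The "primary source wins" rule reduces to a small resolution step at each poll. A simplified sketch at the record-ID level (hypothetical names, not the actual implementation):

```python
def resolve(primary_changes, secondary_changes):
    """Given the sets of record IDs edited on each side since the last
    poll, decide which direction each change syncs. When both sides
    edited the same record, the primary source (e.g. Salesforce) wins."""
    conflicts = primary_changes & secondary_changes
    to_secondary = primary_changes              # primary edits always propagate
    to_primary = secondary_changes - conflicts  # secondary edits lose conflicts
    return to_primary, to_secondary
```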

ranting-moth

> The primary source wins any merge conflicts that happen within a sync period.

This is a very fancy way of saying that you just drop conflicts and pretend they didn't happen. Syncing databases is very, very tricky. Conflicts are a big part of the trickiness.

ianyanusko

Agreed on the trickiness! Our early users largely told us they preferred one source to take precedence in a conflict, and would rather set that general rule than review every conflict manually. But a handful have expressed interest in the latter approach, so it's on our roadmap to build.


couchand

This looks great! The UI and docs look very nice. Of course long-term reliability is what really matters in this space. I can definitely see incorporating this into client proposals.

A few things I can't immediately see from the docs: do you support subsetting a data source -- only sync records matching criteria? Do you support to/from different instances of the same connector (e.g. Salesforce to Salesforce)? Can you perform any transformations like map over the data, normalize or denormalize tables, etc?

Many clients I can think of this being most useful for would rather host it themselves, is that an option?

One critique: I can't imagine recommending this to a client without SSL support. I'd highly recommend just baking that in to every tier. It would demonstrate that you're serious about keeping your customers' data secure.

ianyanusko

Thank you!

> do you support subsetting a data source -- only sync records matching criteria?

In a one-way sync from Postgres to Salesforce, yes, you can apply filters using a SQL statement, but we’re working on adding this to the Salesforce side as well as two-way syncs. From your perspective, how would you want to set these filters? A SOQL query, or something else?

> Do you support to/from different instances of the same connector (e.g. Salesforce to Salesforce)?

Yup, our infra is data-source agnostic! But Salesforce <> Salesforce is not heavily tested, so there may be some funky behavior with OAuth tokens if you’re trying to connect objects from two different instances during onboarding. Curious what use case you have in mind?

> Can you perform any transformations like map over the data, normalize or denormalize tables, etc?

Besides the one-way SQL filter I mention above, we try to make field mapping easy between the sources by automatically transforming when necessary (e.g., transforming a Salesforce picklist to a Postgres integer and vice versa). But we’re working on allowing users to create more detailed field-level transformations in the next few months.

Thanks for the feedback! Totally hear you on SSL, we’ll move that to every tier.

couchand

> From your perspective, how would you want to set these filters? A SOQL query, or something else?

For myself, I want to write SOQL there, though I'd guess many of your target customers will want a point-and-click option. Selecting a list view for that object could be an interesting UX hack that might be worth exploring: there's some Salesforce Labs product that does that.

> Salesforce <> Salesforce is not heavily tested... Curious what use case you have in mind?

I've seen a number of configurations for different purposes, the most common one being sandbox data movements.

Management of multiple production orgs gets complex fast, but there are a few places where a tool like this could find a niche: org migrations come to mind. There's often an interim period where you're two way syncing (even though you'd rather not!) before the org being phased out is done.

Good luck!

ianyanusko

Really helpful on both fronts, thanks for the thoughts. Both the sandbox and migration use cases make a lot of sense.

ianyanusko

Sorry, forgot to respond to one piece:

> Many clients I can think of this being most useful for would rather host it themselves, is that an option?

Right now you can self-host the associated datasets (like the Postgres event log table), but we're still working on allowing you to self-host the entire service. Stay tuned :)

waltbosz

Neat. I once wrote a Salesforce to Oracle sync app. I think the goal was to bulk pull data out of Salesforce for processing in Oracle to avoid Salesforce API costs. This was years ago. It was a fun and aggravating project.

ianyanusko

Nice, I hear you on "fun and aggravating" :)

Sounds similar to the use cases we're seeing, where it's not only easier to process/build on Postgres, but also saves you on the Salesforce API.

_bry-guy

I'm currently on a team tasked with thinning out our Salesforce App Exchange app. Bracket looks very cool.

1. Do you have any plans to release an API, rather than using a webapp to define the sync?
2. Does your Salesforce integration support syncing metadata, including custom metadata?
3. Do you have data on the impact Bracket has on platform events and other Salesforce limits?
4. Can you share any information on pricing?

Thanks!

ianyanusko

1. Yes, it's on our roadmap for Q1! We're getting that request a lot.
2. We don't currently sync metadata.
3. Our footprint on your Salesforce API depends on whether you're using polling or streaming, and then on the cadence of your syncs or frequency of changes. You can see some data on best/worst case scenarios here: https://docs.usebracket.com/connecting/salesforce_api
4. Yup, we price based on the amount of data kept in sync. You can see more here: https://www.usebracket.com/pricing

_bry-guy

Great, thank you! The lack of metadata syncing may be a dealbreaker for me, but I'll be thinking of Bracket as we build solutions going forward. Cheers.

njudah

As the founder of what would become Heroku Connect (aka Cloudconnect), this thread warms my heart. Love to see the innovation - good luck!

ianyanusko

Thank you! It means a lot to see your comment here

matchagaucho

Even if this only accomplished one-way directional sync with backup and disaster recovery capabilities, it'd probably find a 10x wider audience.

ianyanusko

Thanks for the comment - are you saying that one-way sync has a 10x wider audience than two-way syncs?

matchagaucho

Without knowing the target market, I'm thinking the taxonomy of "two-way sync" is understood by a very narrow community. ISVs have been using this approach for years. So maybe that's the audience?

If so, then Heroku Connect probably does need an alternative.

A business user (non-technical 10x audience) just wants to sleep at night knowing their daily backup was successful and they have recovery capabilities on-demand.

There are unicorn scale companies in this space (such as https://www.owndata.com/). And Salesforce is getting back into the backup game.

ianyanusko

Gotcha, that makes sense. From our perspective, the one-way sync market (backups and ETL/rETL) was already saturated with heavily-capitalized players who were hitting each other hard with marketing dollars. We didn't feel that we had a unique insight with one-way syncs, but did feel like we'd come across something unique with two-way use cases.

So we decided to double down in this smaller, underserved area for now as we try to build something people love, even if it's a bit niche.

That being said, there probably is room in the backup space for challengers against the incumbents - it's a massive market.

