How I Use pg_cron and Edge Functions to Automate My Business
How I automate sales, ops, alerts and repetitive tasks with pg_cron and Edge Functions across my SaaS and service businesses.
When people talk about automation, they usually imagine complex no-code flows, expensive tools, or giant enterprise systems. My experience has been the opposite. Some of the most useful automations in my business run on a very simple combination: pg_cron inside Postgres and Edge Functions connected to my product logic.
I use this setup to automate recurring tasks in SaaS projects like Blogfy, Manyfy and OneJobs, but also in traditional businesses like Proflimsa, our cleaning services company, and You Minox, my beard growth brand. The reason I like this stack is simple: it is practical, cheap, fast to deploy, and close to the data.
Instead of building an entire microservice architecture just to send reminders, process queues or update records every hour, I can schedule jobs directly from the database and trigger server-side logic at the edge. That gives me a clean way to automate operations without creating unnecessary complexity.
In this article I want to show you exactly how I think about it, where it has worked for me, what mistakes I made, and how you can use the same approach to automate almost everything in your business.
Why this combination works so well
At a business level, most automation falls into one of these categories:
- Time-based actions: every hour, every day, every Monday, every month.
- Condition-based actions: when a user has not paid, when a lead has not replied, when a booking is about to expire.
- Maintenance jobs: cleanup, syncing, recalculating, archiving, retrying failed tasks.
- Operational alerts: notify the team, notify the client, create internal follow-ups.
pg_cron solves the scheduling part. It lets me run SQL on a schedule directly inside Postgres. Edge Functions solve the execution part when I need business logic, integrations, external APIs, or secure server-side processing.
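As a sketch of the scheduling half, this is roughly what a pg_cron job looks like. It assumes the pg_cron extension is enabled in the database; the table and column names are illustrative, not a real schema:

```sql
-- Minimal pg_cron sketch (assumes the pg_cron extension is enabled).
-- Every 15 minutes, flag invoices unpaid for more than 24 hours.
-- Table and column names here are illustrative.
SELECT cron.schedule(
  'flag-unpaid-invoices',   -- unique job name
  '*/15 * * * *',           -- standard cron syntax
  $$
    UPDATE orders
    SET status = 'overdue'
    WHERE status = 'unpaid'
      AND created_at < now() - interval '24 hours'
  $$
);
```

The job runs inside Postgres on the schedule you give it, and `cron.unschedule('flag-unpaid-invoices')` removes it again.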
The result is a simple architecture:
| Layer | What I use it for |
|---|---|
| Postgres + pg_cron | Scheduling, selecting records, queue creation, recurring jobs |
| Edge Functions | Business logic, API calls, emails, WhatsApp, webhooks, retries |
| Application tables | Source of truth, logs, job states, auditability |
What I love about this is that automation becomes part of the product, not an external patch. I do not depend on five different tools just to keep operations moving.
My rule: automate only after manual validation
One mistake I made early on was trying to automate processes that were not stable yet. That always creates noise. If your workflow is still changing every week, automation will only lock bad assumptions into code.
Now I follow a simple rule: first I do it manually, then I document it, then I automate the stable 80%.
For example, in Proflimsa we did not automate every follow-up from day one. First we observed how clients requested quotes, how long they took to respond, which reminders worked, and what information the operations team actually needed. Only after that did I build scheduled reminders and internal alerts.
The same happened with You Minox. Before automating post-purchase flows, I wanted to understand customer behavior: when they ask shipping questions, when they reorder, when they stop responding, and what content reduces support tickets. Once the pattern became clear, automation made sense.
Automation works best when it reinforces a validated process, not when it tries to invent one.
How I structure automations in practice
My preferred pattern is very straightforward:
- Create a table that represents the operational state.
- Use pg_cron to run a query on a schedule.
- That query either updates records or pushes jobs into a queue table.
- An Edge Function processes the queued jobs.
- Every execution is logged.
This gives me control, observability and retry capability. I try to avoid “magic” automations where things happen and nobody knows why.
Example structure
| Table | Purpose |
|---|---|
| customers | Main business entity |
| orders | Purchases and payment state |
| automation_jobs | Queue of pending tasks |
| automation_logs | Execution history, errors, payloads |
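The two automation tables could look something like this. The exact columns are my assumption, a minimal sketch rather than a fixed schema:

```sql
-- One possible shape for the queue and log tables; the columns
-- are illustrative, not a prescribed schema.
CREATE TABLE automation_jobs (
  id         bigserial PRIMARY KEY,
  job_type   text NOT NULL,                    -- e.g. 'quote_reminder'
  entity_id  bigint NOT NULL,                  -- the customer, order or lead
  payload    jsonb NOT NULL DEFAULT '{}',      -- data the worker needs
  status     text NOT NULL DEFAULT 'pending',  -- pending | done | failed
  attempts   int  NOT NULL DEFAULT 0,
  created_at timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE automation_logs (
  id          bigserial PRIMARY KEY,
  job_id      bigint REFERENCES automation_jobs (id),
  status      text NOT NULL,
  response    jsonb,                           -- payload or API response
  error       text,
  executed_at timestamptz NOT NULL DEFAULT now()
);
```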
Then I schedule jobs like:
- Every 15 minutes: detect unpaid invoices older than X hours.
- Every hour: find abandoned onboarding flows.
- Every day at 8 AM: generate team summary.
- Every Monday: recalculate rankings, quotas or lead priorities.
- Every month: archive inactive records and clean temporary files.
The database decides what needs to happen. The Edge Function decides how it happens.
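Putting those two roles together, a scheduled detection job might look like this sketch: pg_cron only selects eligible records and enqueues work, and never performs the side effect itself. The `quotes` table and its columns are illustrative:

```sql
-- Sketch: detect and enqueue only; a worker performs the action later.
-- The quotes table and its columns are illustrative.
SELECT cron.schedule(
  'queue-quote-reminders',
  '0 * * * *',   -- every hour
  $$
    INSERT INTO automation_jobs (job_type, entity_id, payload)
    SELECT 'quote_reminder', q.id,
           jsonb_build_object('lead_email', q.email)
    FROM quotes q
    WHERE q.status = 'sent'
      AND q.sent_at < now() - interval '24 hours'
      AND NOT EXISTS (   -- do not queue the same reminder twice
        SELECT 1 FROM automation_jobs j
        WHERE j.job_type = 'quote_reminder'
          AND j.entity_id = q.id
      )
  $$
);
```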
What I automate with pg_cron and Edge Functions
1. Lead follow-up and sales reminders
This is one of the highest ROI automations I have implemented.
In service businesses, leads arrive at random times and the team gets busy. Without a system, some leads are answered late, others are forgotten, and some receive inconsistent follow-up. That is lost money.
In Proflimsa, I can schedule a query that checks leads created in the last 24 hours with no assigned follow-up or no status change. If they are still untouched, I create an automation job that triggers an Edge Function to notify the sales or operations team.
I also use similar logic for quote reminders:
- If a quote was sent 24 hours ago and there is no reply, send reminder A.
- If 72 hours pass without response, notify internal staff.
- If the lead has high value, escalate to manual call.
This sounds simple, but simple automation compounds. A lot of revenue leakage comes from basic inconsistency.
2. Subscription recovery and failed payment flows
In SaaS products, failed payments are normal. What matters is how quickly and systematically you handle them.
With pg_cron, I schedule checks for accounts with:
- expired cards,
- failed renewals,
- trial periods ending soon,
- accounts suspended but recoverable.
Then Edge Functions handle the next step: sending email, updating account state, creating internal alerts, or calling a payment provider's API.
For me, this is much better than relying only on external billing dashboards because I want the recovery logic inside my own system. That way I can adapt the messaging by product, plan, geography or customer segment.
For example, a user in OneJobs may need a different reactivation message than a user in Blogfy. The infrastructure is the same, but the business logic changes.
3. Content and publishing workflows
In content-driven products, there are many repetitive tasks that do not need a human every time.
I use scheduled jobs for things like:
- publishing queued content at specific times,
- updating status from draft to scheduled to published,
- generating summaries or reports,
- cleaning orphaned media records,
- sending notifications after publication.
This is especially useful in products where users expect consistency. If content should go live at 9 AM, it should go live at 9 AM without me or the team babysitting it.
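A scheduled publish job of this kind can stay entirely in SQL, since it is just a state transition. This sketch assumes a `posts` table with a `publish_at` column, both illustrative:

```sql
-- Illustrative: promote scheduled content at its publish time.
SELECT cron.schedule(
  'publish-scheduled-posts',
  '* * * * *',   -- every minute, so a 9 AM post goes live at 9 AM
  $$
    UPDATE posts
    SET status = 'published',
        published_at = now()
    WHERE status = 'scheduled'
      AND publish_at <= now()
  $$
);
```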
4. Operational alerts for traditional businesses
People often assume automation is only for SaaS, but traditional businesses benefit even more because their processes are usually more manual.
In Proflimsa, recurring operations create many opportunities for automation:
- upcoming service reminders,
- staff assignment alerts,
- follow-up after completed service,
- invoice reminders,
- inactive client reactivation campaigns.
If a recurring cleaning service is due soon and the assignment is incomplete, a scheduled job can detect that and trigger an alert. If a service was completed but no feedback request was sent, another job can handle it. These are not glamorous automations, but they reduce friction and improve execution.
5. Ecommerce post-purchase flows
With You Minox, post-purchase communication matters a lot because customers often have questions about shipping, usage, consistency and expected results.
Using pg_cron and Edge Functions, I can automate flows like:
- send order confirmation follow-up if payment is approved,
- send usage guide after delivery window,
- request review after a defined number of days,
- trigger reorder reminder based on estimated product duration,
- alert support if shipment status is stuck.
This reduces support load and improves customer experience at the same time.
Why I prefer queue-based automation
One lesson from experience: do not make scheduled jobs do too much directly.
At first, I used cron jobs that executed everything in one step: query data, call external APIs, send emails, update multiple tables. It worked until it failed. Then debugging became painful.
Now I prefer a queue model:
- pg_cron identifies eligible records.
- It inserts jobs into automation_jobs.
- Edge Functions process those jobs safely.
- Results are written to logs.
This gives me several benefits:
- Retry logic: failed jobs can be retried without rerunning the whole schedule.
- Rate control: useful when dealing with email, WhatsApp or third-party APIs.
- Audit trail: I know what happened and when.
- Isolation: one bad task does not break the entire automation batch.
In real operations, this matters more than elegance. Reliability beats cleverness.
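As a rough sketch of the processing side, assuming a Supabase-style Edge Function using the supabase-js v2 client and the standard Supabase environment variables; `sendReminder` is a hypothetical helper standing in for the real side effect (email, WhatsApp, webhook):

```typescript
// Rough sketch of a queue worker as a Supabase-style Edge Function.
// Assumes supabase-js v2 and the standard Supabase env vars.
import { createClient } from "npm:@supabase/supabase-js@2";

// Hypothetical side effect; replace with a real integration.
async function sendReminder(payload: unknown): Promise<void> {
  console.log("would send reminder for", payload);
}

Deno.serve(async () => {
  const db = createClient(
    Deno.env.get("SUPABASE_URL")!,
    Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!,
  );

  // Pull a small batch so one slow run cannot block everything.
  const { data: jobs } = await db
    .from("automation_jobs")
    .select("*")
    .eq("status", "pending")
    .limit(10);

  for (const job of jobs ?? []) {
    try {
      await sendReminder(job.payload);
      await db.from("automation_jobs")
        .update({ status: "done" })
        .eq("id", job.id);
      await db.from("automation_logs")
        .insert({ job_id: job.id, status: "done" });
    } catch (err) {
      // Isolation: one bad job is marked failed, the rest continue.
      await db.from("automation_jobs")
        .update({ status: "failed", attempts: job.attempts + 1 })
        .eq("id", job.id);
      await db.from("automation_logs")
        .insert({ job_id: job.id, status: "failed", error: String(err) });
    }
  }

  return new Response("processed");
});
```

In a Supabase setup, the function itself can also be invoked on a schedule, for example with a pg_cron job that calls it over HTTP through the pg_net extension, so the queue drains continuously.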
My basic automation design checklist
Whenever I build a new automation, I go through this checklist:
Business checklist
- Is this process already validated manually?
- What KPI will improve: response time, conversion, retention, collections, support load?
- What should happen if the automation fails?
- Does the team need visibility or approval?
Technical checklist
- What table is the source of truth?
- What condition makes a record eligible?
- Should I update directly or create a queued job?
- How will I prevent duplicate execution?
- Where will I log success and failure?
- How will I retry safely?
This sounds basic, but it prevents many expensive mistakes.
Mistakes I made and what I changed
1. I automated without enough logs
Early on, some automations were “working” until they were not. The issue was not the automation itself, but the lack of visibility. I had no clear event log, no payload history, and no error categorization.
Now every meaningful automation writes logs: job type, entity ID, execution time, status, response, error message. If something breaks, I want to know in minutes, not after a customer complains.
2. I forgot idempotency
This is a classic. If a scheduled job runs twice, or a function retries, can it safely process the same item again?
If the answer is no, you will eventually send duplicate emails, duplicate reminders or duplicate state changes. I learned to build with idempotency in mind: unique job keys, status checks, and guard clauses before executing side effects.
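Those guards translate directly into SQL. This sketch assumes the automation_jobs queue table and adds an illustrative `job_key` column; the key value shown is just an example format:

```sql
-- Idempotency sketch: a unique key makes re-queuing a no-op.
ALTER TABLE automation_jobs
  ADD COLUMN job_key text,
  ADD CONSTRAINT automation_jobs_job_key_unique UNIQUE (job_key);

-- If the scheduler runs twice, the second insert does nothing.
INSERT INTO automation_jobs (job_type, entity_id, job_key)
VALUES ('quote_reminder', 42, 'quote_reminder:42:2024-06-01')
ON CONFLICT (job_key) DO NOTHING;

-- Guard clause before side effects: claim the job atomically,
-- so two workers never process the same row.
UPDATE automation_jobs
SET status = 'processing',
    attempts = attempts + 1
WHERE id = 42
  AND status = 'pending'
RETURNING id;  -- empty result means someone else already claimed it
```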
3. I mixed business rules everywhere
At one point, some logic lived in SQL, some in the app, some in server functions, and some in random admin scripts. That becomes hard to maintain.
Now I try to keep responsibilities clear:
- SQL for selection, scheduling and state transitions.
- Edge Functions for business actions and integrations.
- Application UI for visibility and manual override.
4. I automated low-value tasks before high-value leaks
This is more strategic than technical. It is easy to automate fun things that save two minutes a day. It is harder, but more valuable, to automate the parts that recover revenue or reduce operational mistakes.
Today, I prioritize automations in this order:
- Revenue recovery
- Lead response speed
- Operational consistency
- Support reduction
- Reporting and convenience
That order has given me much better business results.
How I decide what to automate first
If you want to apply this in your own business, start by asking:
- What task repeats daily or weekly?
- What task depends on timing?
- What task is often forgotten?
- What task affects money when missed?
- What task follows clear conditions?
The best first automations are usually boring but measurable. Not “AI everything.” Just practical systems that remove inconsistency.
For most businesses, I would start with:
| Priority | Automation | Impact |
|---|---|---|
| 1 | Lead follow-up reminders | Higher conversion and faster response |
| 2 | Failed payment recovery | Revenue retention |
| 3 | Operational alerts | Fewer missed tasks |
| 4 | Post-purchase communication | Lower support load and better retention |
| 5 | Weekly summaries | Better management visibility |
A practical mindset for founders and operators
What changed my perspective is understanding that automation is not just a technical tool. It is an operating system for the business.
When you combine pg_cron and Edge Functions correctly, you create a company that remembers things automatically:
- It remembers who needs follow-up.
- It remembers which payment failed.
- It remembers which task is overdue.
- It remembers when a customer should receive the next message.
- It remembers what the team would otherwise forget.
That is powerful, especially in Latin American businesses where teams often operate with limited resources and a lot of manual coordination. You do not need enterprise software to become operationally strong. You need a reliable system built around your real workflows.
For me, that is why this stack has been so effective. It is not flashy. It is useful. And useful systems are the ones that actually scale.
Final advice if you want to implement this
If I had to summarize my approach in a few points, it would be this:
- Start with one painful recurring process.
- Validate manually before automating.
- Use pg_cron for timing and detection.
- Use Edge Functions for actions and integrations.
- Prefer queues over direct heavy execution.
- Log everything important.
- Design for retries and duplicates.
- Measure business impact, not just technical completion.
I have used this mindset across SaaS products, ecommerce and service operations, and the pattern keeps proving itself. The main benefit is not that I save a few hours. The real benefit is that the business becomes more consistent, faster and less dependent on memory.
And in my experience, consistency is one of the biggest competitive advantages a small business can build.
If a process happens often, follows rules, and affects money or customer experience, it probably deserves automation.
That is exactly how I use pg_cron and Edge Functions: not as trendy infrastructure, but as practical leverage to make the business run better every day.


