
10 Crontab Mistakes That Cost Developers Hours (And How to Fix Them)

LmaDev

Published: December 23, 2025 · Category: Tutorial · Author: DevOps Team · Read Time: 10 minutes

We've all seen it: a cron job works perfectly when run manually, but then fails when you add the script to crontab. Or, worse yet, it works perfectly for weeks, until you show up at work on Monday and no report has been generated since Saturday.

I have prepared a list of the 10 most common mistakes made when configuring crontab tasks, and how to avoid them.

1. Missing PATH Configuration (Command Not Found)

The Problem:

# ❌ This works in terminal but fails in cron
0 2 * * * backup.sh
0 3 * * * python cleanup.py

When you run commands manually, your shell loads .bashrc or .zshrc with a full PATH. Cron doesn't do this - it runs with a minimal environment, typically just /usr/bin:/bin.

The Fix:

# ✅ Option 1: Use absolute paths
0 2 * * * /usr/local/bin/backup.sh
0 3 * * * /usr/bin/python3 /home/user/cleanup.py

# ✅ Option 2: Set PATH at the top of crontab
PATH=/usr/local/bin:/usr/bin:/bin:/home/user/scripts
0 2 * * * backup.sh
0 3 * * * python cleanup.py

Pro tip: Run which your-command in your terminal to find the absolute path, then use that path in your crontab.
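
For example, here's a quick sketch for collecting the absolute paths of everything a job calls before you add it to crontab (command -v is the POSIX-portable form of which; the command names are just examples):

```shell
#!/bin/sh
# Print the absolute path of each command a cron job will invoke,
# so the crontab entry can use full paths instead of relying on PATH.
for cmd in sh date gzip; do
    if path=$(command -v "$cmd"); then
        echo "$cmd -> $path"
    else
        echo "$cmd: not found in PATH"
    fi
done
```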

2. Not Handling Output (Disk Fills Up)

The Problem:

# ❌ Sends email for EVERY execution
* * * * * /usr/local/bin/health-check.sh

By default, cron emails all output to the job's owner. A job that runs every minute generates 1,440 emails per day. On many systems no mail transfer agent is configured to actually deliver them, so the messages pile up in the local mail queue and fill your disk.

The Fix:

# ✅ Redirect output to log file
* * * * * /usr/local/bin/health-check.sh >> /var/log/health-check.log 2>&1

# ✅ Discard output if not needed
* * * * * /usr/local/bin/health-check.sh > /dev/null 2>&1

# ✅ Log only errors
* * * * * /usr/local/bin/health-check.sh > /dev/null 2>> /var/log/health-check-errors.log

What does 2>&1 mean?

  • 2 = stderr (error output)
  • >&1 = redirect to stdout
  • >> file = append to file
  • > file = overwrite file
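
Order matters: 2>&1 must come after the file redirection. A minimal demo you can run in any shell (the paths are just for the demo):

```shell
#!/bin/sh
# Correct order: redirect stdout to the file first, then point stderr at stdout.
ls /nonexistent-dir > /tmp/out1.log 2>&1

# Wrong order: stderr is duplicated to the terminal BEFORE stdout is redirected,
# so the error message never reaches the file.
ls /nonexistent-dir 2>&1 > /tmp/out2.log

wc -c < /tmp/out1.log   # non-zero: the error message was captured
wc -c < /tmp/out2.log   # 0: the file is empty
```

This is why cron entries are always written as `> file 2>&1`, never the other way around.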

3. Incorrect Time Zone

The Problem:

You schedule a backup for 2 AM your local time, but the server is in a different timezone. The backup runs at 2 AM UTC instead - which might be 10 PM or 6 AM your time.

# ❌ What time is this really?
0 2 * * * /backup.sh

The Fix:

# ✅ Check server timezone first
$ timedatectl
# or
$ date

# ✅ Set timezone at top of crontab (some cron implementations)
CRON_TZ=America/New_York
0 2 * * * /backup.sh

# ✅ Or set TZ for the command itself (affects the script's view of time, not the schedule)
0 2 * * * TZ=America/New_York /backup.sh

Better approach: Always use UTC for server operations and document the actual execution time:

# Runs at 2 AM UTC (9 PM EST the previous day)
0 2 * * * /backup.sh

4. Not Escaping Special Characters

The Problem:

The percent sign % has special meaning in crontab: an unescaped % is treated as a newline, and everything after the first one is passed to the command as standard input. This breaks many date commands:

# ❌ This breaks
0 0 * * * date +%Y-%m-%d > /tmp/date.txt

# ❌ This also breaks
0 0 * * * echo "Backup from $(date +%Y-%m-%d)" | mail -s "Status" [email protected]

The Fix:

# ✅ Escape the percent signs
0 0 * * * date +\%Y-\%m-\%d > /tmp/date.txt

# ✅ Or use a wrapper script
0 0 * * * /usr/local/bin/backup-with-date.sh

backup-with-date.sh:

#!/bin/bash
DATE=$(date +%Y-%m-%d)
echo "Backup from $DATE" | mail -s "Backup Status" [email protected]

5. Testing in the Wrong Environment

The Problem:

Your script works perfectly when you run it from your terminal but fails in cron. This happens because:

  • Different user (cron might run as different user)
  • Different environment variables
  • Different working directory
  • Different permissions

# ❌ Works in terminal, fails in cron
0 2 * * * cd /app && ./deploy.sh

The Fix:

# ✅ Test exactly as cron would run it
$ env -i sh -c 'your-command'

# ✅ Or even better - test with cron's environment
$ env -i HOME=/home/user PATH=/usr/bin:/bin sh -c 'your-command'

# ✅ Or run through a login shell so profile files get loaded
0 2 * * * /bin/bash -l -c 'cd /app && ./deploy.sh'

Pro debugging technique:

Add this temporary cron job to capture the environment:

* * * * * env > /tmp/cron-env.txt

Compare it with your terminal environment:

$ env > /tmp/terminal-env.txt
$ diff /tmp/cron-env.txt /tmp/terminal-env.txt

6. No Error Handling in Scripts

The Problem:

Your cron job runs a script with multiple commands. If one fails, the rest continue anyway, leaving your system in an inconsistent state.

# ❌ Script continues even if step 1 fails
#!/bin/bash
pg_dump mydb > backup.sql
gzip backup.sql
s3cmd put backup.sql.gz s3://backups/
rm backup.sql.gz

If pg_dump fails (database unreachable), you'll compress an empty file and upload it, deleting the old backup. Disaster!

The Fix:

# ✅ Exit immediately on any error
#!/bin/bash
set -e  # Exit on error
set -u  # Exit on undefined variable
set -o pipefail  # Catch errors in pipes

pg_dump mydb > backup.sql
gzip backup.sql
s3cmd put backup.sql.gz s3://backups/
rm backup.sql.gz

# ✅ Or check each step explicitly
#!/bin/bash
pg_dump mydb > backup.sql
if [ $? -ne 0 ]; then
    echo "Database dump failed!" | mail -s "BACKUP FAILED" [email protected]
    exit 1
fi

gzip backup.sql
s3cmd put backup.sql.gz s3://backups/ || exit 1
rm backup.sql.gz
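
To see what set -e and set -o pipefail actually change, here's a small sketch you can run directly (bash is required for pipefail):

```shell
#!/bin/bash
# A pipeline's exit status is normally that of its LAST command, so a failing
# dump piped into gzip would look successful:
bash -c 'false | cat'
echo "without pipefail: $?"    # prints 0 - the failure is hidden

bash -c 'set -o pipefail; false | cat'
echo "with pipefail: $?"       # prints 1 - the failure surfaces

# With set -e, the script stops at the first failing command:
bash -c 'set -e; false; echo "never reached"'
echo "with set -e: $?"         # prints 1, and "never reached" is not printed
```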

7. Overlapping Job Executions

The Problem:

When a long-running task doesn't finish (or hangs) before its next scheduled execution, cron starts another copy anyway. You end up with two or more instances of the same job running at once, competing for the same files, locks, or database rows.

# ❌ This can run multiple times simultaneously
*/5 * * * * /usr/local/bin/process-queue.sh

If process-queue.sh takes 7 minutes but runs every 5 minutes, you'll have overlapping executions.

The Fix:

# ✅ Use flock to prevent overlaps
*/5 * * * * /usr/bin/flock -n /tmp/process-queue.lock /usr/local/bin/process-queue.sh

# ✅ Or use a PID file
*/5 * * * * /usr/local/bin/process-queue.sh --with-pidfile

process-queue.sh with PID file:

#!/bin/bash
PIDFILE=/tmp/process-queue.pid

if [ -f "$PIDFILE" ]; then
    PID=$(cat "$PIDFILE")
    if ps -p "$PID" > /dev/null 2>&1; then
        echo "Already running (PID: $PID)"
        exit 0
    fi
fi

echo $$ > "$PIDFILE"
trap 'rm -f "$PIDFILE"' EXIT

# Your actual work here
process_queue_items

Using flock (simpler):

# -n = non-blocking (exit immediately if locked)
# -x = exclusive lock
# -w 5 = wait up to 5 seconds for lock
*/5 * * * * /usr/bin/flock -n /tmp/mylock.lock -c '/usr/local/bin/myscript.sh'
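
You can watch flock -n turn away a second run with a quick sketch (requires the util-linux flock, standard on most Linux systems; the lock path is just for the demo):

```shell
#!/bin/sh
LOCK=/tmp/flock-demo.lock

# First "job" takes the lock and holds it for a few seconds in the background.
flock -n "$LOCK" sleep 3 &
sleep 1   # give the background job time to acquire the lock

# Second "job" is rejected immediately instead of running concurrently.
if flock -n "$LOCK" true; then
    echo "lock acquired - job would run"
else
    echo "already running - skipping this run"
fi
wait
```

The second flock exits non-zero without running the command, which is exactly the behavior you want for an every-5-minutes queue processor.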

8. Not Logging Properly

The Problem:

When a cron job fails, you have no idea what went wrong, because there are no logs.

# ❌ No visibility into what happened
0 2 * * * /usr/local/bin/backup.sh > /dev/null 2>&1

The Fix:

# ✅ Append all output to a log file
0 2 * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1

# ✅ Better - use logger to send to syslog
0 2 * * * /usr/local/bin/backup.sh 2>&1 | logger -t backup

# ✅ Even better - structured logging in the script
#!/bin/bash
LOG_FILE="/var/log/backup.log"

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >> $LOG_FILE
}

log "Starting backup..."
pg_dump mydb > /tmp/backup.sql && log "Database dumped" || log "ERROR: Dump failed"
log "Backup completed"
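
As a quick illustrative check (the file path is just for the demo), the timestamped format makes the log easy to filter later:

```shell
#!/bin/bash
LOG_FILE=/tmp/demo-backup.log

# Same helper as above: prefix every line with a timestamp.
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >> "$LOG_FILE"
}

log "Starting backup..."
log "ERROR: Dump failed"

# Pull out just the failures, timestamps included:
grep "ERROR" "$LOG_FILE"
```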

Pro tip: Set up log rotation so logs don't fill your disk:

# /etc/logrotate.d/backup
/var/log/backup.log {
    daily
    rotate 30
    compress
    delaycompress
    notifempty
    create 0644 root root
}

9. Hardcoded Credentials

The Problem:

Passwords and other secrets are written directly into the crontab or the script it calls.

# ❌ NEVER do this
0 2 * * * mysqldump -u root -pMyP@ssw0rd! mydb > backup.sql
0 3 * * * curl -u admin:secretpass https://api.example.com/backup

Anyone with read access to the crontab can see your credentials, and command-line passwords can also leak in process listings (ps aux).

The Fix:

# ✅ Use MySQL config file
# ~/.my.cnf
[client]
user=root
password=MyP@ssw0rd!

# Crontab (no password visible)
0 2 * * * mysqldump mydb > backup.sql

# ✅ Use environment variables
# /etc/environment (no "export" keyword; note that cron does NOT source ~/.profile or ~/.bashrc)
API_TOKEN=your-secret-token

# Crontab
0 3 * * * curl -H "Authorization: Bearer $API_TOKEN" https://api.example.com/backup

# ✅ Use credential files with restricted permissions
0 2 * * * /usr/local/bin/backup.sh

backup.sh:

#!/bin/bash
# Load credentials from secure file (chmod 600)
source /etc/backup/credentials.conf

curl -u "$API_USER:$API_PASS" https://api.example.com/backup

Lock the credentials file down so only root can read it:

$ chmod 600 /etc/backup/credentials.conf
$ chown root:root /etc/backup/credentials.conf
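
A minimal sketch of creating such a file safely (the path and variable values are illustrative; a restrictive umask ensures the file is never world-readable, even for an instant):

```shell
#!/bin/sh
CRED=/tmp/credentials.conf   # use /etc/backup/credentials.conf in practice

umask 077                    # new files are created owner-only
cat > "$CRED" <<'EOF'
API_USER=backup-bot
API_PASS=example-secret
EOF
chmod 600 "$CRED"

# Verify the permissions (GNU stat; prints 600):
stat -c '%a' "$CRED"
```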

10. No Monitoring = Silent Failures

The Problem:

This is the worst mistake: assuming your cron jobs are running correctly. You won't know they failed until it's too late.

Real-world scenarios:

  • Backup script fails for 2 weeks → database crashes → no recent backup
  • SSL renewal cron fails → certificates expire → site goes down
  • Data sync stops → reports show stale data → business decisions based on wrong info

# ❌ Fire and forget
0 2 * * * /usr/local/bin/critical-backup.sh > /dev/null 2>&1

The Fix:

Add monitoring with health check pings:

# ✅ Ping the monitoring endpoint on success
0 2 * * * /usr/local/bin/critical-backup.sh && curl -fsS https://cronmonitor.app/ping/d10bd467df2dac08ad3d3bdc208199d3c0cf6cdsa

If the job fails or never runs, no ping arrives, and the monitor alerts you when the expected check-in is missed.

With CronMonitor.app, you get:

  • Instant alerts (Slack/Discord/Email) when jobs don't ping on time
  • Dashboard showing all your cron jobs' health
  • Execution history and duration tracking
  • Grace periods for jobs with variable runtime

Example with proper monitoring:

# Database backup - critical!
0 2 * * * /usr/local/bin/db-backup.sh >> /var/log/backup.log 2>&1 && curl -fsS https://cronmonitor.app/ping/d10bd467df2dac08ad3d3bdc208199d3c0cf6cdsa

# SSL renewal - must not fail
0 0 1 * * certbot renew && curl -fsS https://cronmonitor.app/ping/d10bd467df2dac08ad3d3bdc208199d3c0cf6cdsa

# Report generation - business critical
0 6 * * 1 /usr/local/bin/weekly-report.sh && curl -fsS https://cronmonitor.app/ping/d10bd467df2dac08ad3d3bdc208199d3c0cf6cdsa

Bonus: The Complete Crontab Template

Here's a production-ready crontab that avoids all these mistakes:

# Environment
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
[email protected]
HOME=/home/user

# Cron job format:
# .---------------- minute (0 - 59)
# |  .------------- hour (0 - 23)
# |  |  .---------- day of month (1 - 31)
# |  |  |  .------- month (1 - 12)
# |  |  |  |  .---- day of week (0 - 6) (Sunday=0)
# |  |  |  |  |
# *  *  *  *  *  command

# Database backup - daily at 2 AM
0 2 * * * /usr/bin/flock -n /tmp/backup.lock bash -c '/usr/local/bin/backup.sh >> /var/log/backup.log 2>&1 && curl -fsS https://cronmonitor.app/ping/d10bd467df2dac08ad3d3bdc208199d3c0cf6cdsa'

# Cleanup old logs - weekly Sunday 3 AM
0 3 * * 0 find /var/log/app -name "*.log" -mtime +30 -delete && curl -fsS https://cronmonitor.app/ping/d10bd467df2dac08ad3d3bdc208199d3c0cf6cdsa

# Process queue - every 5 minutes (with overlap protection)
*/5 * * * * /usr/bin/flock -n /tmp/queue.lock /usr/local/bin/process-queue.sh >> /var/log/queue.log 2>&1

# Health check - every minute (silent)
* * * * * /usr/local/bin/health-check.sh > /dev/null 2>&1 && curl -fsS https://cronmonitor.app/ping/d10bd467df2dac08ad3d3bdc208199d3c0cf6cdsa

# Monthly report - 1st day of month at 6 AM
0 6 1 * * /usr/local/bin/monthly-report.sh && curl -fsS https://cronmonitor.app/ping/d10bd467df2dac08ad3d3bdc208199d3c0cf6cdsa

Quick Debugging Checklist

When a cron job isn't working:

  • [ ] Check cron is running: systemctl status cron (the service is crond on RHEL-family systems)
  • [ ] Review your crontab entries: crontab -l
  • [ ] Test with absolute paths: which your-command
  • [ ] Check file permissions: ls -la /path/to/script
  • [ ] Verify script is executable: chmod +x script.sh
  • [ ] Test in cron's environment: env -i sh -c 'your-command'
  • [ ] Check logs: /var/log/syslog or journalctl -u cron
  • [ ] Verify timezone: date vs TZ=UTC date
  • [ ] Look for overlapping jobs: ps aux | grep your-script
  • [ ] Review mail queue: mailq
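
Several of these checks can be rolled into a tiny helper that runs any command the way cron would (the HOME and PATH values are assumptions; match them to what your crontab actually sets):

```shell
#!/bin/sh
# cron-test.sh - run a command in a cron-like minimal, non-login environment.
# Usage: ./cron-test.sh '/usr/local/bin/backup.sh'
env -i \
    HOME="$HOME" \
    SHELL=/bin/sh \
    PATH=/usr/bin:/bin \
    /bin/sh -c "$1"
```

If a command fails here but works in your normal shell, the culprit is almost certainly an environment variable or a relative path.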

Summary

Most cron issues stem from environment differences between your terminal and cron's execution context. The golden rules:

  1. Always use absolute paths
  2. Log everything (but rotate logs)
  3. Handle errors explicitly (set -e)
  4. Prevent overlaps (use flock)
  5. Monitor actively (don't assume it works)
  6. Test in cron's environment before deploying
  7. Never hardcode credentials
  8. Document your cron jobs

The biggest mistake? Assuming cron jobs "just work" once scheduled. Add monitoring to catch failures before they become disasters.

Share this article