Mastering Claude Co-Work
Module 8 of 18

Scheduled Tasks — Your 24/7 Workforce

Design automations that run while you sleep — and build the hardware strategy that makes them reliable.

What you'll learn

Configure a scheduled task with the correct name, prompt, schedule, model, and output folder
Explain the hardware dependency and evaluate whether a dedicated machine is warranted for your use case
Build a complete skill-to-scheduled-task pipeline for a real recurring workflow
Compare Sonnet vs. Opus for scheduled tasks and select the right model for cost vs. quality

The Difference That Changes Everything

In Module 5, you created your first scheduled task. It was a quick win — you picked a schedule from a dropdown, typed a prompt, and confirmed it would run automatically. You experienced the emotional payoff of watching the task history show a completed run you didn't manually trigger.

This module goes deeper. Because there's a meaningful difference between having scheduled tasks and designing scheduled tasks. A task that runs reliably at 6 AM every weekday and delivers consistent, high-quality output is not an accident — it's the product of deliberate architecture choices made before you click save.

Scheduled tasks are not just tasks that run later. They are autonomous work executed without a human present. That distinction changes how you think about every configuration decision.

Complete skill-to-scheduled-task pipeline: Manual Task → Refine → Create Skill → Schedule → Monitor

The Five Schedule Options

Co-Work's scheduling interface offers five built-in frequencies:

  • Hourly — for monitoring tasks, polling workflows, and high-frequency data collection
  • Daily — for morning briefs, daily reports, overnight processing
  • Weekly — for weekly summaries, content calendars, regular reviews
  • Weekdays — for business workflows that should not run on weekends
  • Manual — for on-demand tasks you want pre-configured but not automatically triggered
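To make the five frequencies concrete, here is a minimal Python sketch of the firing logic. This is illustrative only — Co-Work's real scheduler is internal to the app, and the function name and `anchor_weekday` parameter are assumptions for the example:

```python
from datetime import datetime

def fires_on(frequency: str, when: datetime, anchor_weekday: int = 0) -> bool:
    """Return True if a task with this frequency would run at `when`.

    Illustrative sketch of the five Co-Work frequencies, not app code.
    `anchor_weekday` (0 = Monday) only matters for Weekly tasks.
    """
    frequency = frequency.lower()
    if frequency == "hourly":
        return True                      # every hour the machine is awake
    if frequency == "daily":
        return True                      # once per calendar day
    if frequency == "weekly":
        return when.weekday() == anchor_weekday
    if frequency == "weekdays":
        return when.weekday() < 5        # Monday through Friday only
    if frequency == "manual":
        return False                     # never auto-triggered
    raise ValueError(f"unknown frequency: {frequency}")

# A Saturday morning: Weekdays is skipped, Daily still fires
saturday = datetime(2025, 1, 4, 6, 0)    # 2025-01-04 is a Saturday
print(fires_on("weekdays", saturday))    # False
print(fires_on("daily", saturday))       # True
```

Note that "Manual" always returns False here: the task is pre-configured but only runs when you trigger it yourself.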

Paul (Advanced Use Cases) references more granular scheduling such as running a task at specific times or multiple times per day. This behavior may vary by app version — verify in your current installation whether multi-time scheduling is available before building workflows that depend on it.

The Hardware Dependency

This is the constraint that determines everything about how reliable your scheduled tasks are. Two requirements must be met for a scheduled task to execute:

  1. Your computer must be awake (not sleeping, not shut down)
  2. The Co-Work app must be open

If your computer sleeps through a scheduled task's time, the task is skipped. In most reports, the task does not queue and run when the computer wakes up; it simply misses that scheduled cycle. Some configurations reportedly auto-rerun a skipped task on wake, so verify the behavior in your own environment before relying on it.
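The skip-not-queue behavior is easy to reason about as a quick simulation. The helper below is hypothetical, not Co-Work code; it just shows that runs falling inside a sleep window are lost rather than deferred:

```python
def completed_runs(scheduled_hours, sleep_windows):
    """Return the scheduled hours that actually execute.

    scheduled_hours: hours of day when a task should fire, e.g. [6, 12, 18]
    sleep_windows:   list of (start, end) hours when the machine is asleep
    Runs inside a sleep window are skipped outright; nothing is queued.
    """
    def asleep(hour):
        return any(start <= hour < end for start, end in sleep_windows)
    return [h for h in scheduled_hours if not asleep(h)]

# Laptop lid closed from 22:00 to 07:00: the 6 AM run is simply lost
print(completed_runs([6, 12, 18], [(22, 24), (0, 7)]))   # [12, 18]
```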

The Hardware Dependency Is Non-Negotiable

Scheduled tasks require the computer to be awake and the Co-Work app to be open. A sleeping laptop means skipped tasks. This is a verified constraint from official documentation — not a bug that will be fixed.

Hardware dependency decision tree: will computer sleep? leads to dedicated machine or sleep prevention strategy

What This Means for Laptop Users

If your only machine is a laptop that you close and carry around, you will miss scheduled tasks regularly. The options:

  • Sleep prevention: Configure your OS to prevent sleep while plugged in. This works for desktop-like setups but degrades battery on portable use.
  • Accept the gaps: For non-critical tasks (weekly summaries, content drafts), missing an occasional run is acceptable.
  • Dedicated machine: The most reliable solution, covered below.

The Dedicated Machine Strategy

Community Practice — Paul & Jack Roberts

Running a dedicated machine for 24/7 scheduled tasks is a community practice cited by Paul (Advanced Use Cases, Co-Work OS) and Jack Roberts. It is not an official Anthropic recommendation. The underlying technical requirement — a machine that stays awake — is verified.

The setup Paul and Jack Roberts describe:

  • A Mac Mini M2 (~$600 at the time of writing) running continuously
  • iCloud Drive set as the Co-Work parent folder, so outputs are accessible from all devices
  • The Mac Mini runs Co-Work and executes all scheduled tasks
  • Your laptop or other devices read the outputs via iCloud sync

The economics make sense if your scheduled tasks are delivering consistent business value. Think of it as the hardware cost of hiring a 24/7 employee — which brings us to how Jack Roberts frames the broader token spend question.

Practitioner Framing — Jack Roberts

"Token as Employee Cost" is a practitioner framing by Jack Roberts. It is not official Anthropic terminology. The underlying calculation — comparing token costs to equivalent human labor costs — is a sound ROI framework used by several practitioners.

When you're deciding whether to run a complex analysis task on Opus every day, the question isn't "is this expensive?" — it's "what would this cost if a human did it?" A daily strategic summary that would take a $500/hour consultant an hour or two costs a few dollars in tokens. The ROI math is rarely close.
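The "token as employee cost" arithmetic can be written out directly. Every figure below is an assumption chosen for illustration — actual token prices and consultant rates vary:

```python
# Illustrative "token as employee cost" comparison (Jack Roberts's framing).
# All figures are assumptions for the sake of the arithmetic, not real prices.

human_rate = 500.00        # $/hour for an equivalent consultant (assumed)
human_hours = 1.5          # time a human would spend on the daily summary

tokens_per_run = 200_000            # input + output tokens for one run
token_price_per_million = 15.00     # assumed blended $/1M tokens on Opus

human_cost = human_rate * human_hours
token_cost = tokens_per_run / 1_000_000 * token_price_per_million

print(f"human:  ${human_cost:,.2f} per run")         # human:  $750.00 per run
print(f"tokens: ${token_cost:,.2f} per run")         # tokens: $3.00 per run
print(f"ratio:  {human_cost / token_cost:,.0f}x")    # ratio:  250x
```

Even if the assumed numbers are off by an order of magnitude, the gap between the two columns is what makes the dedicated-machine cost easy to justify.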

The Five Configuration Fields

Every scheduled task has five fields that determine whether it succeeds or fails. Treat each one with deliberate care.

Model selection guide: Sonnet for routine tasks, Opus for complex analysis

1. Name

Use a descriptive name that tells you exactly what the task does without opening it. "Daily Email Brief" is better than "Email Task". You'll have multiple scheduled tasks eventually — make each one identifiable at a glance in the task list.

2. Prompt (or Skill Reference)

This is the most important decision: reference a skill rather than writing a raw prompt. A skill-based scheduled task gets the benefit of all your skill refinements automatically. A raw prompt doesn't improve — you have to edit the scheduled task directly to change its behavior, which is error-prone. The pattern: create the skill first, then reference it in the task prompt using its trigger name or /skill-name.

3. Schedule

Match the schedule to the actual cadence of the work. If you're consuming a daily briefing every weekday morning, set Weekdays at 6:00 AM. If you check it occasionally, set Weekly or Manual. Over-scheduling wastes tokens and creates a backlog of outputs you never read.

4. Model

Community-tested guidance from Paul and Jack Roberts (not in official scheduling docs — test for your specific task):

  • Sonnet: Routine tasks — email triage, file organization, daily briefings, regular summaries. Faster, lower cost.
  • Opus: Complex analysis — UXR synthesis, financial analysis, strategic summaries that require reasoning. Slower, higher quality, higher cost.
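The community guidance above reduces to a trivial routing rule. This sketch encodes the heuristic only — the task categories and the default are my illustrative choices, not an official API:

```python
# Task types drawn from the community guidance above; extend for your own work.
COMPLEX = {"uxr synthesis", "financial analysis", "strategic summary"}

def pick_model(task_type: str) -> str:
    """Community heuristic: Sonnet for routine work, Opus for deep analysis.

    Defaults to Sonnet, the cheaper option, for unknown task types --
    upgrade a task to Opus only after Sonnet's output proves insufficient.
    """
    if task_type.lower() in COMPLEX:
        return "opus"
    return "sonnet"

print(pick_model("email triage"))        # sonnet
print(pick_model("Financial Analysis"))  # opus
```

Defaulting to the cheaper model and upgrading on evidence keeps token spend proportional to demonstrated need.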

5. Output Folder

Always specify an output folder. A scheduled task that runs but saves output to an unspecified location creates an archaeological problem — you'll spend time hunting for the output. Set a folder like ~/CoWork/Operations/Scheduled-Output/ and confirm it exists before saving the task.
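Confirming the folder exists before you save the task takes two lines. The path below is the example from the text; substitute your own layout:

```python
from pathlib import Path

# Create the output folder if it is missing; safe to re-run (idempotent).
# Path matches the example in the text -- adjust to your own folder layout.
out = Path.home() / "CoWork" / "Operations" / "Scheduled-Output"
out.mkdir(parents=True, exist_ok=True)
print(out.exists())   # True
```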

Task History and Troubleshooting

Every scheduled task maintains a run history accessible at Settings → Scheduled Tasks → [task name] → History. Review this regularly — it's the only way to know if your tasks are actually running.

Common issues and their causes:

  • Task skipped: Computer was asleep during the scheduled time. Check sleep settings.
  • Task ran but output is wrong: The skill needs refinement. Update the skill, not the scheduled task prompt.
  • Task ran but connector failed: Re-authenticate the connector in Settings → Connectors.
  • Task never ran: Verify Co-Work app was open at the scheduled time. Check that the task is enabled (not paused).
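Between manual history checks, a lightweight way to catch silent failures is to watch the age of the newest file in the output folder. This is a hypothetical monitoring sketch, not a Co-Work feature:

```python
import time
from pathlib import Path

def is_stale(folder: Path, max_age_hours: float) -> bool:
    """True if the newest file in `folder` is older than max_age_hours
    (or the folder is empty) -- a sign the scheduled task stopped running.
    """
    files = [p for p in folder.iterdir() if p.is_file()]
    if not files:
        return True
    newest = max(p.stat().st_mtime for p in files)
    return (time.time() - newest) > max_age_hours * 3600

# Example: a daily task should have produced output within ~26 hours
# print(is_stale(Path.home() / "CoWork/Operations/Scheduled-Output", 26))
```

The 26-hour window (rather than exactly 24) leaves slack for runs that start late or take a while to finish.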

The Complete Automation Pipeline

At this point, the full pipeline is assembled. You learned each stage across Modules 5 and 6:

  1. Do it manually — run the task in chat, iterate until the output is right
  2. Refine — run it three or more times, improving the prompt each time
  3. Create a skill — codify the quality-controlled workflow as a named, reusable skill
  4. Schedule the skill — reference the skill in a scheduled task with the right model and output folder
  5. Monitor — check task history, refine the underlying skill when output quality drifts

The reason skills come before scheduling is that a skill can be improved in one place and the improvement propagates to every scheduled task that references it. Update the skill → every schedule that calls it gets the improvement automatically. This is the architecture that makes your automation library maintainable over time.

A raw prompt in a scheduled task is a dead end. A skill referenced in a scheduled task is a living document. The difference compounds over months.

Build-Along Exercise

Build One Complete Skill-to-Scheduled-Task Pipeline

Step 1: Choose the workflow. Pick one recurring task: daily email brief, weekly metrics summary, file organization, or content idea generation. This should be a task you already have a working skill for — if not, create the skill first using the Module 6 process.

Step 2: Confirm the skill is ready. Run the skill manually once. Confirm the output is at the quality level you want from an automated run. If not, refine the skill before scheduling it.

Step 3: Create the scheduled task. Navigate to Settings → Scheduled Tasks → Create New. Fill in all five fields: a descriptive name, a prompt that references your skill by name, the appropriate schedule frequency and time, the correct model (Sonnet for routine, Opus for complex), and an explicit output folder.

Step 4: Trigger a manual run. Use the "Run now" option to test the scheduled task immediately. Check the output folder for the result. Verify the output quality matches what you got when running the skill manually.

Success criteria: One complete skill-to-scheduled-task pipeline running. Task visible in history with at least one completed run. Output folder contains at least one output file. Hardware strategy documented.

Knowledge Check
I have created a scheduled task with all five fields configured: name, skill-based prompt, schedule, model, and output folder
I understand the hardware dependency and have a plan for keeping my computer available at scheduled task times
My scheduled task references a skill rather than a raw prompt — so improvements to the skill propagate automatically
I have checked task history and confirmed at least one successful run
I selected the appropriate model (Sonnet for routine tasks, Opus for complex analysis) and understand why this choice matters for cost and quality