Categories

General & Philosophy

What is this framework?
The Sequifi Delivery Framework is our standard operating procedure for building and delivering software. It covers the complete workflow from idea inception through production release and ongoing support, with clear handoffs between Business, Product, UX/UI, Development, and Support teams.
Why do we need a formal framework?
Three main reasons: Predictability - everyone knows when things will ship; Quality - clear checkpoints prevent bugs from slipping through; Fast response - structured escalation for production issues without derailing sprint work.
How is this different from what we do now?
Key additions include: formal Definition of Ready/Done gates, 7-day UAT regression testing, design running one sprint ahead, structured support handoff documentation, and mandatory RCA for every P0 incident.
Isn't this just adding bureaucracy?
No. The philosophy is "process enables speed, not replaces judgment." We're adding structure where it prevents problems - like quality gates that catch issues before production. We explicitly right-size everything: a bug fix doesn't need a 4-hour PRD session.
What does "right-size everything" mean?
Match process rigor to complexity. Large new features get full PRDs, design sessions, and comprehensive testing. Small bug fixes just need a good ticket. Enhancement features get lightweight PRDs. We don't apply heavyweight process to lightweight work.
What does "no big upfront design" mean?
We don't try to plan precisely what we'll build 2-3 months from now. We focus on the next two sprints and land them well. Beyond that, we have direction and roadmap items, but not detailed specs. This keeps us adaptable.
What's the commitment horizon?
We commit to the current sprint. We plan the next sprint. Beyond that is intentionally flexible. This allows us to adapt to changing priorities, new information, and urgent client needs.
Why "speed over precise landing dates"?
Precise dates can't be achieved at scale without "big upfront design," which conflicts with agility. Instead, we optimize for delivering value quickly and iteratively. When we show consistent speed, trust builds and focus shifts from "you missed the date" to "how do we get more value."
What does "product is a whole, not pieces" mean?
Customers only benefit from the complete, working product - not from individual components. A feature in staging isn't value delivered. Teams aren't done when their part is done; they're done when value is shipped and customers can use it.
Is this framework mandatory?
Yes, this is the standard for all teams. Consistency enables cross-team coordination and predictability. That said, we iterate on the framework itself - if something isn't working, we change it through retrospectives.
What version is this framework?
This is version 1.0. We expect to iterate and improve it based on experience. If something creates friction without adding value, we'll change it.

Backlog & Prioritization

Where does the backlog live?
The product backlog is maintained in Index by the Product Owner.
How do I submit a new feature request?
Submit requests through Index or communicate with the Product Owner. Include: clear title, business justification (why is this needed?), expected outcomes (what does success look like?), and priority rationale (why this priority level?).
How quickly will my request be reviewed?
The Product Owner triages new submissions weekly. Nothing sits unreviewed for more than one week. You'll receive feedback on whether the item was accepted, needs more information, or was deprioritized.
How is priority determined?
Priority considers: business/strategic importance, client impact (how many affected, how severe), regulatory or deadline requirements, and dependencies. The Product Owner makes the call with stakeholder input during refinement sessions.
What are the priority levels for backlog items?
Critical: Business-critical, blocking clients, regulatory (current/next sprint). High: Important for business goals, significant client impact (next 2-3 sprints). Medium: Nice to have, improves experience (next quarter). Low: Future consideration, minor improvements (roadmap/icebox).
Can I change priority after submission?
Yes. Priorities are validated during bi-weekly refinement sessions. If business circumstances change, raise it with the Product Owner. Priorities aren't set in stone.
What happens in backlog refinement?
Bi-weekly sessions (1-2 hours) where the Product Owner, leads, and stakeholders review the top 10-15 items. We clarify requirements, define acceptance criteria, validate priorities, and identify dependencies. Items leave the session ready for design or sprint entry.
Does every item need to go through refinement?
No. Well-understood bugs and small tasks can skip refinement and go directly to sprint. Refinement is for items that need clarification or have complexity.
How much refined work should be ready?
We target 2-3 sprints worth of items in "Ready" state at all times. This ensures teams never run out of refined work and can plan ahead.
What happens to items that sit in backlog too long?
Items with no updates for 3+ months are flagged as stale. We target less than 10% stale items. Stale items are reviewed and either reprioritized, clarified, or closed.
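As a rough illustration, the staleness check can be scripted against a backlog export. This is a hypothetical sketch only: the real backlog lives in Index, and the item shape and `last_updated` field name are assumptions, not a supported integration.

```python
from datetime import date, timedelta

STALE_AFTER_DAYS = 90   # "no updates for 3+ months"
STALE_TARGET = 0.10     # target: less than 10% stale items

def flag_stale(items, today=None):
    """Return (stale_items, stale_ratio) for a list of backlog items.

    Each item is a dict with at least a 'last_updated' date
    (an assumed shape for illustration).
    """
    today = today or date.today()
    cutoff = today - timedelta(days=STALE_AFTER_DAYS)
    stale = [i for i in items if i["last_updated"] < cutoff]
    ratio = len(stale) / len(items) if items else 0.0
    return stale, ratio
```

Flagged items then go to the Product Owner for review: reprioritize, clarify, or close.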

Design & PRD

When does design work start?
After an item is refined and marked "Ready for Design." Design runs one sprint ahead of development - while dev builds Sprint N, design works on Sprint N+1.
Does every feature need new designs?
No. Bug fixes, backend changes, and minor UI tweaks can reference existing patterns. New designs are for new features, significant UX changes, or new screens.
What does a complete design include?
High-fidelity mockups for all states (default, error, empty, loading), responsive breakpoints (desktop, tablet, mobile), interaction specifications, exported assets, and a walkthrough with the dev team.
What if designs aren't ready in time for sprint planning?
The feature doesn't enter the sprint. We don't start development with incomplete designs. The feature waits for the next sprint when designs are complete.
Where are designs stored?
All designs are created and maintained in Figma.
When is a PRD needed?
Full PRD: Large new features. Lightweight PRD (objective + stories + AC): Feature enhancements. Ticket only: Bug fixes. Tech spec: Technical debt/refactoring.
Who writes the PRD?
The Product Owner leads PRD creation with input from Functional Lead, Technical Lead, Business Stakeholders, and UX Lead. It's a collaborative effort, typically in a 2-4 hour session per feature.
What's in a PRD?
Feature overview, objectives, business benefits, use cases, user stories with acceptance criteria, design references, dependencies, and explicitly out-of-scope items. See the PRD Template.
What does "PRD = Acceptance Criteria = Test Cases" mean?
When scenarios and use cases are well-detailed, they should read like test cases. QA should be able to derive test cases directly from the PRD without additional interpretation. The PRD becomes the single source of truth for dev, QA, and stakeholders.
Who approves the PRD?
Both Functional Lead and Technical Architect must sign off. This is the gate - nothing enters a sprint without an approved PRD (for features that require one) and finalized designs.
What if stakeholders disagree during PRD creation?
That's exactly why we have the session - to resolve disagreements before development starts. Functional Lead facilitates alignment. Unresolved items are documented as "Open Questions" and must be closed before sprint entry.
Where are PRDs stored?
PRDs are stored in Google Drive: Sequifi Product Docs in each feature's sub-folder.

Sprint Cycle

How long is a sprint?
Two weeks (10 working days).
What happens on Day 1?
Sprint Planning. 2-4 hours. The Product Owner presents prioritized backlog items, team discusses technical approach, stories are estimated (story points), and the team commits to a Sprint Goal and Sprint Backlog.
What's the Definition of Ready?
What a story needs before entering sprint: PRD approved (if applicable), acceptance criteria defined, UX/UI designs complete and reviewed, dependencies identified, and story sized under 8 points (otherwise split).
What's the Definition of Done?
What a story needs to be considered complete: code complete and peer reviewed, unit tests written and passing, integration tests passing, QA tested and approved, documentation updated, deployed to staging.
What happens in daily standups?
15-minute timebox, every day. Each team member answers: What did I complete yesterday? What am I working on today? Any blockers? Detailed discussions are taken offline.
When does backlog refinement happen during the sprint?
Mid-sprint (Day 4-5). 1-2 hours. This prepares items for the NEXT sprint, keeping the pipeline flowing.
Can we add items mid-sprint?
Only P0/P1 production issues. Everything else waits for the next sprint. If we constantly interrupt sprints, we lose predictability.
What if a story is bigger than estimated?
Flag it in standup immediately. Options: reduce scope, split the story, or carry it to next sprint. We don't extend sprints.
What happens to incomplete stories at sprint end?
If close to done, they carry over to the next sprint. Otherwise, they go back to the backlog for re-prioritization. We don't mark things "done" that aren't done.
What's Sprint Review?
Day 10, 1-2 hours. Team demos completed features to stakeholders. Stakeholder feedback is captured. Product Owner accepts or rejects stories based on acceptance criteria.
What's a Retrospective?
Day 10, 1 hour, after Sprint Review. Team reflects: What went well? What needs improvement? Output: 2-3 actionable improvements for next sprint with owners assigned.
Who attends Sprint Review vs Retrospective?
Sprint Review: PO, leads, scrum team, and stakeholders (open invitation). Retrospective: Technical Lead, PC, dev team, QA. Product Owner is optional. It's a safe space for the team.
How do you handle scope creep?
Technical Lead protects the sprint. New requests go to backlog for next sprint prioritization, unless it's a P0/P1 production issue that must be addressed immediately.
What if someone is on PTO during the sprint?
Capacity is confirmed during Sprint Planning. We account for PTO, holidays, and recurring meetings. The team commits to what they can actually deliver.

Quality & Testing

When does QA testing happen?
Continuously during the sprint as features complete. Days 8-10 are focused on final regression and sign-off.
What's the difference between staging and UAT?
Staging: Development testing environment. Features are tested as they're built. UAT: Pre-production environment. Full 7-day regression testing happens here before production release.
Why 7 days for UAT regression?
Comprehensive testing across all modules, all user roles, and all critical paths takes 8-12 hours of actual testing. Spreading it over a week allows time for bug fixes, retesting, and thorough coverage. Rushed testing leads to escaped defects.
What's included in regression testing?
All modules, all user roles, all critical paths. Payroll flows get extra attention given their time-sensitivity. It's not just testing new features - it's ensuring we didn't break existing functionality.
Can QA block a release?
Yes. Quality Engineer has veto power on releases. If they haven't signed off, the release doesn't happen.
What if we don't have enough QA capacity?
We prioritize what gets tested. Critical paths and payroll flows are non-negotiable. Lower-risk areas may get lighter testing. Over time, we invest in automation to extend coverage.
What's the INVEST framework?
A quality checklist for user stories. Independent, Negotiable, Valuable, Estimable, Small, Testable. Stories meeting these criteria are well-defined and deliverable.
What are the 5 characteristics of a good user story?
Shippable: Can go to production independently. Consumable: Users get value when turned on. Well-defined: Product, Business, Tech all contributed. Estimable: The team can estimate the effort. Time-bound: Fits in a sprint.
What's an "escaped defect"?
A bug that makes it to production without being caught in development or testing. We track escaped defects as a quality metric - the trend should be decreasing.
How do we measure quality?
Key metrics: Escaped defects (bugs in production), commitment vs delivered ratio (target >85%), P0/P1 incident frequency (should trend down), and regression test pass rate.

Release Process

When do standard releases happen?
Every Wednesday at 11 AM, after completing 7-day UAT regression testing.
Why Wednesday?
Mid-week provides Monday-Tuesday for final prep and Thursday-Friday for monitoring. It also avoids Tuesday and Friday, which are common payroll processing days for clients.
What's the Go/No-Go meeting?
Wednesday 9 AM, before release. Technical Architect, Quality Engineer, and Product Owner review readiness: all features complete, QA signed off, no P0/P1 bugs, release notes ready, rollback plan ready. Any of them can veto.
What can block a release?
P0 or P1 bugs found in UAT. Missing QA sign-off. Incomplete features that are core to the release. Missing rollback plan. A veto from the Technical Architect, Quality Engineer, or Product Owner.
What if we miss the Wednesday window?
The release moves to the next Wednesday. We don't force a bad release - rushing costs more than waiting a week.
What happens after deployment?
2-hour monitoring window. Team checks: application loads, login works, core features function, error logs clean, performance normal, no spike in support tickets.
What's the support handoff?
Before a release is complete, we document: feature overview, user guide, configuration options, known limitations, troubleshooting guide, workarounds, affected clients, and escalation contacts. Support team must acknowledge receipt.
When is a release considered complete?
Not until support handoff is delivered and acknowledged. A feature in production without support documentation isn't complete.
How do stakeholders know what shipped?
Release notes are prepared before every Wednesday release and shared by Product Owner. Major features get demo recordings.
When do we rollback?
Rollback immediately: System down, core feature broken for all users, data corruption. Fix forward: Single feature broken with workaround available. See Rollback Procedure.
Who decides to rollback?
Technical Architect makes the final call on rollbacks.

Support & Production Issues

How do production issues get reported?
Client reports to CSM → CSM posts to #sequifi_support Slack channel with details → Support team triages → Logs in GitHub → Technical Lead classifies severity.
What information should CS include when reporting?
Client affected, priority assessment, functional area, issue description, steps to replicate (if possible), video/PostHog recording (recommended), and payroll cutoff time (if applicable).
How quickly are issues acknowledged?
Support team acknowledges in #sequifi_support within 1 hour for standard issues, 15 minutes for P0-PAYROLL.
What are the priority levels for bugs?
P0-PAYROLL: Time-critical payroll bug, cutoff <12 hours (2-4 hour resolution).
P1-PAYROLL: Payroll bug, cutoff 12-48 hours away (12-24 hour resolution).
P0 Critical: System down, data loss, security breach (4-8 hours).
P1 High: Core feature broken, multiple clients (24-48 hours).
P2 Medium: Feature degraded, workaround exists (3-7 days, next release).
P3 Low: Minor issues, cosmetic (2-4 weeks).
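The severity table above can be read as a decision tree. The sketch below is illustrative only: the parameter names are invented, and real classification is the Technical Lead's judgment call with input from others, not a script.

```python
def classify_bug(is_payroll, hours_to_cutoff=None, system_down=False,
                 core_broken=False, workaround_exists=False):
    """Illustrative mapping of the severity table to a decision tree.

    All flags are hypothetical inputs; actual classification is made
    by the Technical Lead.
    """
    # Payroll bugs are classified by time remaining to cutoff
    if is_payroll and hours_to_cutoff is not None:
        if hours_to_cutoff < 12:
            return "P0-PAYROLL"
        if hours_to_cutoff <= 48:
            return "P1-PAYROLL"
    if system_down:            # system down, data loss, security breach
        return "P0"
    if core_broken:            # core feature broken, multiple clients
        return "P1"
    if workaround_exists:      # feature degraded, workaround exists
        return "P2"
    return "P3"                # minor or cosmetic
```

When in doubt, the framework says to escalate higher: pass the more severe flags and let a human downgrade later.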
Who decides the priority level?
Technical Lead makes the final classification, with input from Technical Architect, Quality Engineer, and Customer Success. When in doubt, we escalate higher - it's easier to downgrade later than to discover too late that an issue was critical.
What's the difference between P0 and P0-PAYROLL?
P0-PAYROLL has a hard deadline - payroll cutoff is less than 12 hours away. Everything stops. War room spins up immediately. Regular P0 is critical but not time-bound to payroll processing.
What's a "war room"?
For P0-PAYROLL, the team immediately joins a video call and stays connected until the issue is resolved. All hands on deck, continuous communication, updates every 30-60 minutes to affected clients.
Do P0/P1 bugs skip testing?
They skip full UAT regression. We do targeted testing of critical paths only. Speed matters more. A proper permanent fix may be scheduled for the standard release if the hotfix was temporary.
What if we can't fix a P0 in time?
We explore temporary workarounds - manual fixes, data patches, feature flags to disable the broken component. We communicate transparently with affected clients. Permanent fix follows in standard release.
Where are bugs tracked?
All bugs are logged in GitHub: SEQ Bug Tracker.
What's the on-call structure?
Tier 1: Project Coordinator - first responder, triage (15 min response). Tier 2: Support Engineer + QA - develop and test fix (30 min). Tier 3: Technical Lead + Technical Architect - decisions, approvals, deployment.
What's an RCA?
Root Cause Analysis. Required within 48 hours of every P0. Uses the 5-Whys method to identify the systemic cause. Output includes immediate fixes (done), short-term improvements (this sprint), and long-term changes (next quarter).
Is the RCA about blaming people?
No. It's explicitly blameless. We're asking "what in our system allowed this to happen?" not "who wrote the bug?" Bugs are learning opportunities.
How do clients get updates during incidents?
CSM keeps the client informed. For war room situations, updates every 30-60 minutes until resolved. Transparency builds trust even during problems.

Roles & Responsibilities

What does the Product Owner do?
Manages the product backlog, prioritizes items, leads PRD creation, facilitates refinement sessions, presents items at Sprint Planning, accepts/rejects stories at Sprint Review.
What does the Functional Lead do?
Cross-team oversight, stakeholder management, validates business alignment, participates in PRD sign-off, makes go/no-go decisions on releases.
What does the Technical Architect do?
Architecture decisions across all teams, technical standards, code review for major changes, merge to main (production deploy), rollback decisions, and leads RCAs for P0 incidents.
What does the Technical Lead do?
Each team's Technical Lead combines technical leadership with Scrum Master responsibilities: makes technical decisions for the team, conducts code reviews, classifies bug severity, merges to staging for QA testing, facilitates sprint ceremonies, removes blockers, protects the sprint from scope creep, coaches the team on process, and tracks action items from retrospectives.
Why is the Technical Lead also the Scrum Master?
Combining these roles ensures that technical decisions and process facilitation are aligned. The Technical Lead has both the technical authority to make decisions and the process responsibility to keep the team moving. This reduces handoffs and ensures the person facilitating ceremonies also understands the technical constraints and trade-offs.
What's the difference between Technical Architect and Technical Lead?
Technical Architect (shared across all teams) handles architecture decisions, technical standards, and rollback decisions across the organization. Technical Lead (per scrum team) handles their team's technical decisions plus Scrum Master responsibilities like facilitating ceremonies and removing blockers.
What does the Quality Engineer do?
Owns release sign-off (can block releases), oversees regression testing, provides input on bug severity classification, ensures test coverage, maintains test automation.
What does the Project Coordinator do?
Tracks progress, handles reporting, first responder for support issues (Tier 1), coordinates communication, manages logistics.
Who can block a release?
Technical Architect, Quality Engineer, or Product Owner - any of them can veto at Go/No-Go.
Who decides bug priority?
Technical Lead makes the final classification, with input from Technical Architect, Quality Engineer, and Customer Success.
Who decides to rollback?
Technical Architect makes the final call on production rollbacks.
Who's accountable when a bug slips to production?
We don't blame individuals. The RCA process identifies systemic gaps. Was it requirements? Testing? Code review? Process? We fix the system, not point fingers.
What's the difference between leadership roles and scrum team roles?
Shared roles (Functional Lead, Technical Architect, Product Owner, Design Team) span all teams and handle cross-team coordination. Scrum team roles (Technical Lead, developers, QA, PC) are within a single team. Customer Success Team works directly with clients.

Stakeholder Involvement

When are stakeholders involved in the process?
Bi-weekly refinement sessions (requirements, priorities). PRD sessions for major features (2-4 hours). Sprint Reviews every two weeks (demos, feedback). You're not in daily standups - that's execution team only.
How much time commitment is expected?
Roughly 4-6 hours per sprint: bi-weekly refinement (1-2 hours), Sprint Review (1-2 hours), plus PRD sessions for features you're involved in (2-4 hours as needed).
How do I know what's being worked on?
Attend Sprint Reviews to see demos. The Sequifi Q1 26 and Prodigy Q1 26 roadmaps on the Home page show planned work. Release notes share what shipped each Wednesday.
Can I request a feature and know when it will ship?
We can tell you which sprint it's planned for, but we commit sprint-by-sprint, not months out. Focus is speed and adaptability over precise long-term dates.
How do I escalate something urgent?
For production issues: report through CSM → #sequifi_support. For priority escalation on backlog items: speak with Product Owner or Functional Lead.
What if I disagree with prioritization?
Raise it in refinement sessions. Provide business context and impact. The Product Owner makes the call, but priorities are negotiable with good justification.
Can I attend Sprint Planning?
Sprint Planning is for the execution team. Your input happens earlier - in refinement and PRD sessions. Sprint Review is the ceremony for stakeholder participation.
How do I give feedback on delivered features?
Sprint Review is the formal venue. You can also provide feedback anytime to the Product Owner, which gets captured as new backlog items for iteration.

Tools & Documentation

Where is everything?
Product Backlog: Index
Bug Tracking: GitHub: SEQ Bug Tracker
Feature Development: GitHub: Feature Projects
Designs: Figma
PRDs & Docs: Google Drive
Communication: Slack (#sequifi_support)
Is there a template for PRDs?
Yes. See PRD Template.
Is there a template for RCAs?
Yes. See RCA Template.
What other templates exist?
User Story template, Bug Report template, Definition of Ready checklist, Definition of Done checklist, Pre-Deployment checklist, Post-Deployment checklist, Code Review checklist, Rollback Procedure. All on the Templates page.
Won't all this documentation become a burden?
We right-size it. Small changes get small docs. Templates reduce effort. A well-written PRD is requirements, acceptance criteria, and test cases in one - do it once, use it three ways.

Adoption & Change

When does this take effect?
Starting from the next sprint. Some practices (like 7-day UAT) may phase in over 2-3 sprints as we build the rhythm.
How do we train people?
The wiki documents everything. We'll do walkthrough sessions with each team. Templates and checklists make it easy to follow. Learn by doing with support.
What if this doesn't work for our team?
We iterate. Retrospectives every sprint surface what's working and what's not. If something adds friction without value, we change it. This is v1.0, not the final version.
How do we suggest improvements?
Raise it in your team's retrospective. If it's cross-team, bring it to Scrum of Scrums or Technical Sync. Good ideas get incorporated.
Does this require more people?
No. It's about working smarter with clear gates, not adding headcount. The upfront investment in quality (refinement, design ahead, PRDs) reduces rework downstream.
What's the biggest change from current practice?
Probably the 7-day UAT regression window and design running one sprint ahead. These add some lead time but significantly reduce escaped defects and last-minute scrambles.

Metrics & Measurement

What metrics do we track?
Velocity: Story points completed per sprint. Commitment vs Delivered: Target >85%. Escaped Defects: Should trend down. P0/P1 Frequency: Should trend down. Burndown: Daily progress tracking.
What's velocity?
Total story points completed (not just started) per sprint. We track 3-sprint rolling average. Used for capacity planning and release forecasting.
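For example, the rolling average is just the mean of the last three sprints' completed points. A minimal sketch (the sprint numbers here are made up):

```python
def rolling_velocity(completed_points, window=3):
    """Average story points completed over the last `window` sprints."""
    recent = completed_points[-window:]  # most recent sprints only
    return sum(recent) / len(recent)

# Four sprints of completed points; the average covers the last three.
rolling_velocity([20, 30, 25, 35])  # → 30.0
```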
Should we try to maximize velocity?
No. Metrics are guides, not goals. Gaming velocity defeats the purpose. Focus on sustainable delivery of value. A stable velocity trend is better than an artificially inflated one.
What's commitment vs delivered?
The ratio of what we committed to in Sprint Planning vs what we actually completed. Target is >85%. It measures planning accuracy.
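Computed per sprint, this is simply delivered points over committed points (a sketch with invented numbers):

```python
TARGET = 0.85  # framework target: commitment vs delivered > 85%

def commitment_ratio(committed, delivered):
    """Fraction of committed story points actually completed."""
    return delivered / committed

commitment_ratio(40, 36)  # → 0.9, above the 0.85 target
commitment_ratio(40, 30)  # → 0.75, below target: review planning accuracy
```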
What's an escaped defect?
A bug that reaches production without being caught. We track this to measure quality. The trend should be decreasing over time.
How do we measure success of this framework?
Fewer escaped defects. More predictable delivery (stable velocity). Faster incident response. Higher stakeholder satisfaction. Less firefighting, more planned work.
What's a burndown chart?
A daily visualization of remaining work in the sprint. Shows if we're on track, ahead, or behind. Helps identify problems early.
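A burndown is just remaining points per day compared against an ideal straight line from the committed total down to zero on Day 10. A minimal sketch (the day-by-day numbers are hypothetical):

```python
def burndown(total_points, completed_by_day, sprint_days=10):
    """Return (actual_remaining, ideal_remaining) per elapsed day.

    completed_by_day: points closed on each day so far.
    """
    actual, remaining = [], total_points
    for done in completed_by_day:
        remaining -= done
        actual.append(remaining)
    # Ideal line: burn total_points evenly across sprint_days
    ideal = [total_points - total_points * d / sprint_days
             for d in range(1, len(completed_by_day) + 1)]
    return actual, ideal

# 50 points committed; 5 closed on Day 1, 10 on Day 2.
burndown(50, [5, 10])  # → ([45, 35], [45.0, 40.0])
```

Actual remaining above the ideal line means the sprint is behind; flag it early rather than at Day 10.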
How often are metrics reviewed?
Sprint-level metrics in Sprint Review/Retro. Trend analysis monthly or quarterly. Metrics should inform decisions, not create bureaucracy.