Designing a collaborative learning architecture for institutional investment professionals to navigate portfolio crisis scenarios through structured peer interaction.
Overview
I designed a comprehensive case-based peer learning module for institutional investment professionals facing "The Denominator Effect Dilemma," a realistic portfolio crisis scenario in which learners must analyze trade-offs, navigate competing stakeholder pressures, and defend crisis management decisions.
This project showcases the instructional design process, methodology, and artifacts created during the design phase. Rather than demonstrating a built course, it highlights my ability to design sophisticated learning architectures, facilitate peer learning, and create comprehensive implementation materials grounded in learning science.
The target audience is institutional investors who must make complex decisions under uncertainty, with incomplete information and conflicting stakeholder priorities. These are skills that benefit from collaborative case analysis rather than traditional lecture-based training.
The Challenge
The learning challenge was crisis decision-making itself: weighing trade-offs under time pressure, with incomplete information and competing stakeholder priorities, and then defending choices for which there is no single right answer.
The instructional design challenge was creating a learning experience that builds this kind of judgment through structured peer collaboration rather than lecture-based instruction, while remaining deliverable to busy, distributed teams.
My Approach
I applied evidence-based instructional design frameworks throughout the project:
Identified the need for crisis decision-making training that goes beyond compliance-focused instruction. Analyzed target audience characteristics, prerequisite knowledge requirements, and learning context constraints including time pressure and distributed teams.
Designed a 7-phase partnership workflow to collaborate with subject matter experts on creating realistic case scenarios. Created SME collaboration guides and case design briefs ensuring the case had appropriate complexity, realistic messiness, and multiple defensible options.
Developed the 5-touchpoint peer learning protocol that structures collaboration from initial individual analysis through board defense simulation. Each touchpoint deliberately alternates between individual and collaborative work to maximize learning.
Created a dual rubric system measuring both individual analysis quality (60%) and peer learning contribution (40%). Designed authentic performance tasks rather than traditional knowledge checks, ensuring learners demonstrate actual decision-making capability.
Developed comprehensive facilitator guides, discussion scaffolds, and delivery option documentation. Ensured the learning design could scale across multiple delivery modes and cohorts while maintaining pedagogical integrity.
The Solution
I designed a complete instructional architecture with three primary components: the 5-touchpoint peer learning protocol, a comprehensive implementation package, and flexible delivery options.
Touchpoint 1: Initial Stance (Asynchronous, 30 min) — Learners individually analyze the case and post their preliminary recommendation before seeing peer responses, establishing baseline thinking and creating ownership.
Touchpoint 2: Small Group Stakeholder Analysis (Synchronous, 45 min) — Groups of 4 deeply analyze one stakeholder perspective (Board Chair, CIO, CFO, GP Partners), forcing empathy and perspective-taking through collaborative work.
Touchpoint 3: Gallery Walk (Synchronous, 25 min) — Groups present stakeholder analyses while others challenge assumptions using structured protocols, creating productive cognitive dissonance through exposure to conflicting viewpoints.
Touchpoint 4: Cross-Pollination (Asynchronous, 45 min) — Individual reflection integrating peer insights into revised recommendations, with explicit metacognition about how thinking evolved.
Touchpoint 5: Board Defense (Synchronous, 90 min) — Learners present recommendations to simulated board members (peers role-playing) who ask probing questions, developing confidence in defending decisions under pressure.
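To make the protocol's cadence easy to scan, here is a minimal sketch in Python (purely illustrative; the Touchpoint dataclass and its field names are my own shorthand, not artifacts from the design documents) that encodes the five touchpoints and shows the deliberate alternation between individual and collaborative work.

```python
from dataclasses import dataclass

@dataclass
class Touchpoint:
    order: int
    name: str
    mode: str      # "asynchronous" (individual) or "synchronous" (collaborative)
    minutes: int
    purpose: str

# The 5-touchpoint peer learning protocol as specified in the design.
PROTOCOL = [
    Touchpoint(1, "Initial Stance", "asynchronous", 30,
               "Individual analysis and preliminary recommendation before seeing peers"),
    Touchpoint(2, "Small Group Stakeholder Analysis", "synchronous", 45,
               "Groups of 4 analyze one stakeholder perspective in depth"),
    Touchpoint(3, "Gallery Walk", "synchronous", 25,
               "Groups present analyses; others challenge assumptions via protocols"),
    Touchpoint(4, "Cross-Pollination", "asynchronous", 45,
               "Individual reflection integrating peer insights into a revised recommendation"),
    Touchpoint(5, "Board Defense", "synchronous", 90,
               "Present and defend recommendations to peers role-playing board members"),
]

# The alternation is visible in the mode sequence: async -> sync -> sync -> async -> sync.
sync_minutes = sum(t.minutes for t in PROTOCOL if t.mode == "synchronous")
async_minutes = sum(t.minutes for t in PROTOCOL if t.mode == "asynchronous")
print(f"Synchronous touchpoints: {sync_minutes} min; asynchronous touchpoints: {async_minutes} min")
```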
Minute-by-minute session flow with facilitation moves, discussion quality indicators, troubleshooting common issues, and self-reflection criteria for facilitator development.
ABCD format objectives aligned to Bloom's taxonomy (analysis, evaluation, synthesis levels), with clear assessment evidence and measurability criteria.
3.5-hour structure breakdown, cohort sizing recommendations (16-20 optimal), delivery mode adaptations, technology requirements, and accessibility considerations.
Detailed specifications for all 5 touchpoints including timing, purpose, facilitator roles, and pedagogical rationale grounded in social constructivism.
Dual assessment approach: Individual Analysis (60% - trade-off clarity, stakeholder consideration, evidence use) and Peer Contribution (40% - perspective sharing, constructive challenge, synthesis).
9 tools ensuring productive peer discussions including role cards (Analyst, Devil's Advocate, Synthesizer), discussion protocols, question stems, and challenge cards.
SME instructions for creating realistic case scenarios with requirements for ambiguity, multiple defensible options, stakeholder conflicts, and quantitative exhibits.
7-phase instructional designer and SME partnership process covering needs analysis, case development, review cycles, and quality assurance checkpoints.
Portfolio presentation strategy for discussing this project in interview contexts, emphasizing process over product and methodology over tools.
Fully Online: Zoom breakout rooms, Miro boards for collaborative work, LMS discussion forums for asynchronous touchpoints.
In-Person: Physical breakout spaces, poster boards for gallery walk, printed role cards and scaffolds.
Hybrid: Mixed remote and in-person participants with careful attention to equity and engagement across both groups.
Asynchronous: Discussion board adaptation over a one-week timeframe, though the design acknowledges this loses roughly 60% of the real-time dialogue value.
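If these delivery options were captured as a simple configuration, they might look like the sketch below (the keys, structure, and notes are my own framing of the tools named above, not an artifact from the design documents).

```python
# Illustrative mapping of delivery modes to the supporting tools named in the design.
DELIVERY_MODES = {
    "fully_online": {
        "tools": ["Zoom breakout rooms", "Miro boards", "LMS discussion forums"],
    },
    "in_person": {
        "tools": ["Physical breakout spaces", "Poster boards for the gallery walk",
                  "Printed role cards and scaffolds"],
    },
    "hybrid": {
        "tools": ["Mix of online and in-person tools"],
        "note": "Careful attention to equity and engagement across remote and in-person groups",
    },
    "fully_asynchronous": {
        "tools": ["Discussion boards over a one-week timeframe"],
        "note": "Acknowledged to lose roughly 60% of the real-time dialogue value",
    },
}

print(sorted(DELIVERY_MODES))  # the four documented delivery modes
```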
Design Decisions
Several key design decisions demonstrate evidence-based instructional design expertise:
Rationale: Complex decision-making benefits from multiple perspectives rather than a single "right answer" handed down by an expert. Social constructivism holds that knowledge is constructed through discourse.
Implementation: Designed 5 structured touchpoints ensuring productive collaboration rather than unguided group work that can devolve into unproductive conversation.
Rationale: Avoids free-riding while rewarding both quality thinking AND collaborative behavior. 60% individual analysis measures decision-making capability; 40% peer contribution measures collaboration skills.
Assessment: Individual rubric evaluates trade-off clarity and evidence use. Peer rubric evaluates perspective-sharing, constructive challenge, and synthesis contributions.
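As a worked illustration of how the 60/40 weighting combines the two rubrics, here is a minimal sketch (the 0-100 scale, function name, and example scores are assumptions; the actual rubric descriptors live in the assessment documents).

```python
# Dual rubric weighting from the design: 60% individual analysis, 40% peer contribution.
INDIVIDUAL_WEIGHT = 0.6
PEER_WEIGHT = 0.4

def composite_score(individual: float, peer: float) -> float:
    """Combine the two rubric scores, each assumed to be on a 0-100 scale."""
    return INDIVIDUAL_WEIGHT * individual + PEER_WEIGHT * peer

# Example: strong individual analysis with weak collaboration still leaves a
# meaningful gap, which is the point of weighting peer contribution explicitly.
print(composite_score(individual=90, peer=50))  # 74.0
print(composite_score(individual=90, peer=85))  # 88.0
```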
Rationale: Real decisions involve incomplete data, competing stakeholders, and ambiguity. Case requirements specify multiple defensible options with no clear "right" answer, time pressure, and conflicting priorities.
Outcome: Learners practice professional judgment and argumentation rather than formula application, developing transferable crisis management skills.
Rationale: Preserves learner agency and avoids "sage on stage" dynamics. Facilitator role focuses on keeping discussions productive, managing time, and surfacing diverse views—not providing answers.
Scaffolds: Created 9 discussion tools (role cards, question stems, protocols) enabling facilitators to support learning without directing conclusions.
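To show how the role-card scaffold might be operationalized, here is a small sketch that forms groups of four from a cohort and deals out role cards, one per member. The three named cards (Analyst, Devil's Advocate, Synthesizer) come from the design; the fourth label, the function names, and the shuffling scheme are illustrative assumptions.

```python
import random

ROLE_CARDS = ["Analyst", "Devil's Advocate", "Synthesizer", "Participant"]  # 4th label assumed
STAKEHOLDERS = ["Board Chair", "CIO", "CFO", "GP Partners"]

def form_groups(learners: list[str], group_size: int = 4) -> list[list[str]]:
    """Shuffle the cohort and split it into groups of `group_size` (a 16-20 cohort splits cleanly)."""
    shuffled = random.sample(learners, k=len(learners))
    return [shuffled[i:i + group_size] for i in range(0, len(shuffled), group_size)]

def assign_roles(group: list[str]) -> dict[str, str]:
    """Assign one role card per member; with groups of four, each card is used exactly once."""
    return {member: ROLE_CARDS[i % len(ROLE_CARDS)] for i, member in enumerate(group)}

cohort = [f"Learner {n}" for n in range(1, 17)]        # a 16-person cohort
for group, stakeholder in zip(form_groups(cohort), STAKEHOLDERS):  # one stakeholder lens per group
    print(stakeholder, assign_roles(group))
```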
Impact
Because this course has not yet been built or deployed, impact is expressed as design quality indicators and projected learning outcomes, based on the comprehensive implementation package created:
Design Quality Indicators:
Projected Learning Outcomes:
Business Value:
Reflection
SME collaboration is critical for authentic cases. Realistic crisis scenarios require deep domain expertise that instructional designers don't possess. The 7-phase SME collaboration workflow I created ensures partnership success by clarifying roles, setting expectations for case complexity (multiple defensible options, realistic messiness), and building in review cycles for pedagogical and content quality.
Scaffolding prevents chaos in peer learning. Early in the design process, I recognized that unstructured peer discussion can devolve into unproductive conversation or be dominated by one voice. Creating 9 discussion scaffolds (role cards, question stems, challenge cards, protocols) ensures discussions stay focused while preserving intellectual freedom. This balance between structure and openness is what makes peer learning work.
Assessment drives behavior. The dual rubric—60% individual analysis, 40% peer contribution—prevents free-riding while explicitly valuing collaboration. Learners who know they're assessed on both dimensions behave differently than those evaluated on individual work alone. Making peer contribution an explicit, weighted assessment criterion signals that collaboration is core to learning, not an optional add-on.
Facilitator training matters more than I initially thought. A sophisticated learning design only works if facilitators can execute it. Creating the comprehensive facilitator guide with minute-by-minute timing, facilitation moves mapped to learning science principles, and troubleshooting for common pitfalls ensures quality delivery. The self-reflection criteria help facilitators continuously improve rather than just "following the script."