

How Microsoft Almost Doubled Their Feedback Response Rate at TechDays Sweden

Microsoft's annual developer conference used Ventla to nearly double evaluation response rates — by collecting feedback immediately after each session, not at the end.

Technology · Annual Conference · Enterprise (2,000+) · Sweden

The problem wasn't that attendees didn't have opinions — it was that by the time the evaluation arrived, those opinions had blurred. Microsoft TechDays, Sweden's largest developer conference with thousands of IT professionals and over 100 sessions, used Ventla to nearly double their evaluation response rate by changing when feedback was collected.


The situation

TechDays is Microsoft's flagship annual conference for the Swedish developer community — thousands of IT professionals, managers, and developers across more than 100 sessions over several days. Speakers fly in from across Europe and North America. The sessions range from hands-on technical workshops to strategic keynotes on where the software industry is heading.

For an event that prides itself on the quality of its content, the quality of feedback on that content matters enormously. Microsoft uses evaluation data to decide which speakers to invite back, which topics to expand, and how to shape the conference programme year over year.

But their previous evaluation system had a structural problem. By the time attendees received a post-conference survey, they'd sat through dozens of sessions. Individual impressions had faded. The feedback that came back was superficial — overall satisfaction scores, not the session-by-session insight the programme team needed.

What they needed

Running a conference of this scale, Microsoft needed to:

  • Capture feedback at the session level, not just the event level
  • Collect evaluations while impressions were still fresh — ideally immediately after each session
  • Communicate agenda, speaker, and session information efficiently across a large, dispersed audience
  • Enable networking between the thousands of attendees who shared professional interests
  • Do all of this through a single platform attendees could access without friction

How they used Ventla

The pivot was straightforward but significant: instead of sending one evaluation at the end of the conference, Microsoft configured Ventla to push a brief feedback prompt immediately after each session ended.

Attendees received a notification on their phone — the one they were already using for the agenda and session details — prompting them to rate the just-completed session while they were still in the room or walking to the next one. Three questions, thirty seconds. The feedback captured was specific and immediate.

Beyond evaluation, Ventla served as the operational hub for the conference. Speakers' profiles, session descriptions, real-time agenda updates, and networking tools were all accessible in the same interface. Attendees who'd been using the app to navigate their day were already in the right place when the evaluation notification arrived.

What happened

Evaluation response rates at TechDays nearly doubled compared to previous years. That improvement is significant not just as a metric but as an operational outcome — Microsoft's programme team went from incomplete, generalized feedback to a comprehensive, session-level dataset they could actually act on.

The quality of the feedback changed too. Because attendees were responding in the moment, the comments were specific: reactions to particular arguments a speaker made, views on whether the technical depth matched what was advertised, concrete suggestions for what they'd want to see next year.

For a conference that's been running for years and wants to keep improving, that specificity is the difference between "we think it went well" and "we know what to do differently."

What this means for similar organizations

The lesson from TechDays is generalizable to any event that runs multiple sessions and cares about the quality of individual content: the timing of feedback collection matters as much as the questions you ask.

A post-conference survey sent two days later competes with everything else in an attendee's inbox. An in-app prompt sent thirty seconds after a session ends captures something more honest. The session is still in front of them. Their reaction is uncontaminated by everything that came after.

For technology conferences specifically — where audiences are comfortable with mobile tools and the bar for session quality is high — session-level feedback collected through an event app produces the kind of data that makes programme improvement systematic rather than instinctive.


Industry: Technology · Event type: Annual Developer Conference · Attendees: Thousands · Region: Sweden

Running a conference where session quality matters and your feedback data isn't telling you enough? Talk to Ventla — we'll show you what the data from events like this actually looks like.