Google Meet’s Ultrasound Room Detection: The Quiet Fix for Noisy Joins

You walk into the conference room.
Everyone’s seated.

The TV is showing the Meet grid. The room kit is already connected.
And then someone joins from their laptop with audio on.

Screech. Echo. That painful “can you mute?” chorus.

Here’s the thing: most meeting misery starts in the first 10 seconds.
Not with strategy. With joining the wrong way.

Google’s answer is a quiet one. Literally.
Ultrasound.

Key takeaways: what you’ll learn in 60 seconds

  • Google Meet can detect you’re in a compatible room using an ultrasonic signal and then push you into Companion mode automatically.
  • This reduces echo loops, wrong-room check-ins, and “who’s causing feedback?” moments.
  • It’s great… until it isn’t. Mic permissions, audio filters, headsets, and large meetings can break it.
  • Admins can control it at the room level, and users can still check in manually.

Why this matters more than it looks

Meetings are already eating the calendar alive.
People feel it in their bones.

When knowledge workers spend triple-digit hours each year in unnecessary meetings, the tolerance for any added friction drops fast. And when teams are overloaded, even tiny delays get emotionally expensive.

And the cost isn’t just “annoying.”
It’s measurable.

If meetings consume a big chunk of paid time, the “join correctly” moment becomes a real operational problem, not a UX nitpick.

Bold insight: The fastest way to improve meeting quality is to reduce “start-up failure,” not to add new in-call features.

A quick, concrete ROI example

No fantasy math. Just a simple scenario.

  • 1 office has 10 conference-room meetings per day
  • 8 attendees typically open Meet on their own device
  • Ultrasound room detection saves 10 seconds per attendee by removing search + wrong-button mistakes

That’s:
10 meetings × 8 people × 10 seconds = 800 seconds/day = 13.3 minutes/day.

Over ~20 workdays, that’s 266 minutes/month.
About 4.4 hours per month.

Now multiply by 10 rooms in one building.
That’s ~44 hours/month of reclaimed time from one tiny join-step improvement.
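The back-of-envelope math above is easy to script so you can plug in your own numbers. The figures below are this article’s example scenario, not measurements:

```python
# Back-of-envelope ROI for faster room joins.
# All defaults are the article's example figures; swap in your own.

def reclaimed_hours_per_month(meetings_per_day: int,
                              attendees_per_meeting: int,
                              seconds_saved_per_attendee: int,
                              workdays_per_month: int = 20,
                              rooms: int = 1) -> float:
    """Hours of join-time friction removed per month."""
    seconds_per_day = (meetings_per_day * attendees_per_meeting
                       * seconds_saved_per_attendee)
    return seconds_per_day * workdays_per_month * rooms / 3600

one_room = reclaimed_hours_per_month(10, 8, 10)            # ~4.4 h/month
building = reclaimed_hours_per_month(10, 8, 10, rooms=10)  # ~44 h/month
print(f"{one_room:.1f} h/month per room, {building:.1f} h/month per building")
```

Change any one input and the total moves linearly, which is exactly why small per-join savings compound across a building.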

Not magic. Just less mess.

What Google’s ultrasound “proximity detection” is actually doing

This isn’t medical ultrasound.
No imaging. No scanning. No spooky sci-fi.

It’s closer to an audio “beacon.”

The flow (what happens in real life)

  1. Your conference room hardware emits a high-frequency ultrasonic signal.
  2. Your phone’s microphone detects it only around the pre-join screen (the “green room”).
  3. Meet recognizes the specific room and highlights Use Companion mode.
  4. When you join that way, you’re checked into the correct room without extra steps.
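At its core, the mic-side detection in step 2 is checking for energy at a known near-ultrasonic frequency. Google hasn’t published how Meet actually encodes the room signal, so here is a deliberately simplified sketch: a single assumed 19 kHz tone, detected with the Goertzel algorithm (the tone frequency, 48 kHz sample rate, and threshold are all illustrative assumptions):

```python
import math

def goertzel_power(samples, target_hz, sample_rate):
    """Energy of `samples` at one frequency bin (Goertzel algorithm)."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2

def room_beacon_present(samples, sample_rate=48_000,
                        beacon_hz=19_000, threshold=10_000.0):
    """True if the frame has enough energy at the assumed beacon tone."""
    return goertzel_power(samples, beacon_hz, sample_rate) > threshold

# 10 ms frame carrying the beacon tone vs. ordinary speech-band audio
fs, n = 48_000, 480
beacon = [math.sin(2 * math.pi * 19_000 * i / fs) for i in range(n)]
speech = [math.sin(2 * math.pi * 1_000 * i / fs) for i in range(n)]
print(room_beacon_present(beacon), room_beacon_present(speech))  # True False
```

This also makes the failure modes later in the article concrete: if the OS low-pass filters the mic input above the speech band, the 19 kHz energy never reaches the detector, and detection silently fails.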

So the feature isn’t only about speed.
It’s “choice architecture.”

It nudges you away from the dangerous button.

Where it shows up

  • In the Google Meet mobile app
  • Also via Meet inside the Gmail app, which matters because plenty of people tap meetings from email while walking in

And yes, it’s been on laptops already.
Mobile is the practical upgrade.

Because phones are the thing people actually have in hand at the doorway.

The real win: it prevents the “double-audio” trap

Companion mode exists for a reason.
Most people ignore it.

They join normally. Mic on. Speakers on.
And then the room system is also doing audio.

That’s how you get feedback, echoes, and that weird robotic reverb.
Every. Single. Time.

Ultrasound room detection helps by making the “safe join” obvious before the mistake happens.

Companion mode is not a lesser experience

It’s a second-screen mode designed for people physically in the room.

Typical Companion mode perks:

  • chat
  • captions
  • reactions / hand raise
  • polls and Q&A (where enabled)
  • screen sharing from your own device without hijacking the room kit

So you get participation without turning the room into a sound war.

Ultrasound vs the other “join the room” tricks

| Method | What users do | Speed | Failure pattern | Admin overhead | Best for |
| --- | --- | --- | --- | --- | --- |
| Manual room code | Type a code / pick the room | Medium | Typos, wrong room | Medium | Small orgs, low hardware maturity |
| QR code | Scan on the door or screen | Fast | Camera permissions, bad signage | Low | Visitors, shared spaces |
| NFC/Bluetooth proximity | Tap or auto-detect nearby | Fast | Device compatibility, radio quirks | Medium | Controlled device fleets |
| Ultrasound proximity (Meet) | Open Meet; it detects the room | Fast | Mic permissions, audio filtering | Low–Medium | Meet rooms with consistent hardware |
| “Just join normally” | Click Join now | Fast (until it’s not) | Echo, feedback, chaos | None | Nobody. Stop doing this. |

What’s different about ultrasound: it doesn’t require camera alignment, it doesn’t need a tap, and it fits the behavior people already have, which is simply opening Meet.

The catches nobody tells you first

This feature is helpful.
But it’s not bulletproof.

When proximity detection can fail

These are the common breakpoints:

  • Microphone permissions are denied
  • The meeting is joined from certain special shortcut pages instead of the standard Meet flow
  • Your OS or device has audio processing that filters out ultrasonic frequencies
  • You’re using certain headsets or dongles as the active mic
  • Very large meetings can block the flow (yes, this matters)

And if it fails, people revert to the old habit.
Join normally. Echo returns.

So you need a fallback plan, not just a rollout email.

A practical troubleshooting checklist (for real humans)

If “Room detected” never appears:

  • Turn on mic permissions for Meet (or Gmail if joining from there).
  • Temporarily disable voice-isolation style mic processing.
  • If you’re on a headset: switch the input mic back to the device mic for the join screen.
  • Join from the standard Meet entry (not a shortcut page).
  • If all else fails: join Companion mode manually and use the manual room check-in option.

Short. Boring. Effective.

Admin + IT rollout checklist that won’t backfire

If you’re an admin, treat this as a behavior change project.
Not a feature toggle.

1) Confirm room readiness

  • Verify your conference rooms are using supported Meet hardware that emits the ultrasonic signal.
  • Standardize setups across rooms. Mixed gear creates mixed results.

2) Set expectations

Say it plainly:

  • “If you’re in the room, join on your laptop/phone in Companion mode.”
  • “Do not join with full audio unless you’re remote.”

No corporate fluff.
Just rules.

3) Watch for the sneaky blockers

  • Company-managed devices with aggressive audio enhancement settings
  • Accessibility settings that alter mic behavior
  • Headsets that become the default input device and block ultrasonic pickup

4) Enforce minimum app versions

This is where rollouts die quietly.

If your mobile app is behind, users won’t see the prompt.
They’ll assume the feature “doesn’t work.”

So put minimum versions into your internal release notes and device management checks.
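A device-management check for this can be as simple as a version comparison. The minimum version below is a placeholder for illustration, not Google’s actual requirement:

```python
def parse_version(v: str) -> tuple:
    """'231.0.5' -> (231, 0, 5); tuples compare component-by-component."""
    return tuple(int(part) for part in v.split("."))

# Placeholder minimum -- substitute the real Meet app version your rollout needs.
MIN_MEET_VERSION = "200.0.0"

def needs_update(installed: str, minimum: str = MIN_MEET_VERSION) -> bool:
    """True if the installed app is older than the rollout's minimum."""
    return parse_version(installed) < parse_version(minimum)

# Hypothetical fleet inventory: device name -> installed Meet version
fleet = {"device-a": "199.2.1", "device-b": "205.0.3"}
stale = [name for name, version in fleet.items() if needs_update(version)]
print(stale)  # ['device-a']
```

Flagging stale devices before the rollout email goes out is what prevents the “feature doesn’t work” tickets.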

Privacy and trust: what you should tell your team

People hear “ultrasound + microphone” and tense up.
Fair.

The clean explanation is simple:

  • Meet uses the microphone briefly around the pre-join screen to detect a high-frequency signal from the room.
  • It’s meant for room detection and join guidance, not recording conversations.
  • Admins can enable/disable it per room, and users can switch it off in controls.

Don’t oversell it.
Don’t get cute.

Just explain the scope and the controls, then move on.

FAQs

Does this mean Google Meet is “listening” all the time in the room?

No. Room detection is designed to run around the pre-join experience and for a short window after joining in Companion mode.

Why does it push Companion mode instead of letting me join normally?

Because “join normally” from inside a room creates echoes and feedback when the room kit is already handling audio.

What should I do if it never detects the room?

Enable mic permissions, disable heavy mic processing (like voice isolation), avoid headset-only input during join, or use manual room check-in.
