Opus is a versatile audio codec with a variable sample rate and bitrate, suitable for both music and speech. It is defined in RFC 6716 and required by WebRTC.
Opus can operate at various sample rates, from 8 kHz to 48 kHz, and at variable bitrates, from 6 kbit/sec to 510 kbit/sec.
The RTP payload format for Opus, defined in RFC 7587, explains the use of media type parameters in SDP. This article aims to analyze them and show, in particular, how "asymmetric streams" can be achieved.
This is an example of SDP defining an Opus offer or answer:
m=audio 54312 RTP/AVP 101
a=fmtp:101 maxplaybackrate=16000; sprop-maxcapturerate=16000;
maxaveragebitrate=20000; stereo=1; useinbandfec=1; usedtx=0
Let's clarify one thing immediately, about rtpmap.
As specified in RFC 7587 Ch. 7, the media subtype portion of rtpmap must always be 'opus/48000/2' (48000 samples/sec, 2 channels) - i.e. 'a=rtpmap:101 opus/48000/2' - regardless of the actual sample rate used. So you can happily leave this configuration element out of your thoughts, even if you want to use a narrowband version of Opus.
Another less-than-intuitive aspect to clarify is how RTP timestamps are managed, given that RTP must represent audio with variable sample rates.
From RFC 7587, Ch. 4.1:
Opus supports 5 different audio bandwidths, which can be adjusted during a stream. The RTP timestamp is incremented with a 48000 Hz
clock rate for all modes of Opus and all sampling rates. The unit
for the timestamp is samples per single (mono) channel. The RTP
timestamp corresponds to the sample time of the first encoded sample
in the encoded frame. For data encoded with sampling rates other
than 48000 Hz, the sampling rate has to be adjusted to 48000 Hz.
This can be interpreted in this way: "The timestamp must always be set as if the sample rate is 48000 Hz."
Default case: the encoder is set at 48 kHz. A 20 msec frame contains 960 samples (48000 samples/sec * 20 msec).
When the encoder is set at 8 kHz, instead, a 20 msec frame contains 160 samples (8000 samples/sec * 20 msec). The timestamp in the RTP packet must be adapted, so that the sample rate is normalised to 48 kHz, by multiplying the number of samples by 6 (48000/8000).
In both cases, then, a 20 msec frame will have an RTP representation of 960 clock ticks.
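The normalisation above can be sketched in a few lines of Python (the function name is illustrative, not part of any Opus API):

```python
# RTP timestamps for Opus always advance at a 48 kHz clock rate,
# whatever the encoder's actual sampling rate (RFC 7587, Ch. 4.1).
RTP_CLOCK_RATE = 48000

def rtp_increment(frame_ms, encoder_rate):
    samples = encoder_rate * frame_ms // 1000   # samples in one frame
    factor = RTP_CLOCK_RATE // encoder_rate     # normalisation factor
    return samples * factor                     # ticks at 48 kHz

# A 20 msec frame is 960 ticks at 48 kHz, and still 960 ticks at
# 8 kHz (160 samples * 6).
print(rtp_increment(20, 48000))  # 960
print(rtp_increment(20, 8000))   # 960
```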
Now we start looking at the parameters that help the two parties in setting their encoders and decoders.
From RFC 7587, Ch. 6.1:
maxplaybackrate: a hint about the maximum output sampling rate that
the receiver is capable of rendering in Hz. The decoder MUST be
capable of decoding any audio bandwidth, but, due to hardware
limitations, only signals up to the specified sampling rate can be
played back. Sending signals with higher audio bandwidth results
in higher than necessary network usage and encoding complexity, so
an encoder SHOULD NOT encode frequencies above the audio bandwidth
specified by maxplaybackrate. This parameter can take any value
between 8000 and 48000, although commonly the value will match one
of the Opus bandwidths (Table 1). By default, the receiver is
assumed to have no limitations, i.e., 48000.
This optional parameter is telling the encoder on the other side: "Since I won't be able to play at rates higher than `maxplaybackrate` you can save resources and bandwidth by limiting the encoding rate to this value."
A practical case is transcoding from Opus to G.711, where the final playback rate will be 8000 Hz anyway.
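As a sketch of how an application might honour this hint (the function is hypothetical; the rate list is the set of Opus bandwidths from Table 1 of RFC 7587):

```python
# Standard Opus audio bandwidths (RFC 7587, Table 1), expressed as
# sample rates: NB 8000, MB 12000, WB 16000, SWB 24000, FB 48000.
OPUS_RATES = [8000, 12000, 16000, 24000, 48000]

def encoder_rate_for(maxplaybackrate=48000):
    """Highest standard Opus rate not above the receiver's hint."""
    fitting = [r for r in OPUS_RATES if r <= maxplaybackrate]
    return fitting[-1] if fitting else 8000

print(encoder_rate_for(16000))  # 16000: wideband is enough
print(encoder_rate_for())       # 48000: no hint means no limitation
```

A value between two standard bandwidths (e.g. 20000) simply selects the next one below it, since encoding frequencies above the hint would be wasted.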
The mirror-image (and equally optional) parameter is sprop-maxcapturerate, defined in RFC 7587 Ch. 6.1:
sprop-maxcapturerate: a hint about the maximum input sampling rate
that the sender is likely to produce. This is not a guarantee
that the sender will never send any higher bandwidth (e.g., it
could send a prerecorded prompt that uses a higher bandwidth), but
it indicates to the receiver that frequencies above this maximum
can safely be discarded. This parameter is useful to avoid
wasting receiver resources by operating the audio processing
pipeline (e.g., echo cancellation) at a higher rate than
necessary. This parameter can take any value between 8000 and
48000, although commonly the value will match one of the Opus
bandwidths (Table 1). By default, the sender is assumed to have
no limitations, i.e., 48000.
This parameter is telling the decoder on the other side: "Since I won't be able to produce audio at rates higher than `sprop-maxcapturerate` you can save resources by limiting the decoding rate to this value."
A practical example is transcoding from G.711 to Opus, with the source always limited to a capture rate of 8000 samples/sec.
An additional element, maxaveragebitrate, refers to the maximum average bitrate that the decoder will be able to manage. This is a hint that it is not worth the remote encoder using higher bitrates, and that it can save resources instead.
From RFC 7587, Ch. 6.1:
maxaveragebitrate: specifies the maximum average receive bitrate of
a session in bits per second (bit/s). The actual value of the
bitrate can vary, as it is dependent on the characteristics of the
media in a packet. Note that the maximum average bitrate MAY be
modified dynamically during a session. Any positive integer is
allowed, but values outside the range 6000 to 510000 SHOULD be
ignored.
This parameter is telling the remote encoder: "Since my decoder can't handle bitrates higher than maxaveragebitrate, you can save computation power and bandwidth by limiting your encoder bitrate to this value."
A practical example could be a mobile client that wants to ensure the download bandwidth is not saturated. Note that this value refers only to the initial negotiation (SDP offer/answer), while the parties can negotiate different values during an active call.
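A minimal sketch of applying the hint on the sending side (the function name is illustrative; the bounds are the valid Opus range):

```python
def target_bitrate(requested, maxaveragebitrate=510000):
    """Clamp the local encoder's target bitrate to the remote
    decoder's hint, within the valid Opus range of 6000-510000."""
    return max(6000, min(requested, maxaveragebitrate))

print(target_bitrate(64000, maxaveragebitrate=20000))  # 20000
print(target_bitrate(64000))                           # 64000: no hint
```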
Given the interpretations above, it also seems possible to negotiate asymmetric streams: the two entities involved can encode and decode at different rates when appropriate.
In particular, if we imagine an entity with local parameters:
maxplaybackrate=Da; sprop-maxcapturerate=Ea; maxaveragebitrate=Fa
and remote parameters:
maxplaybackrate=Db; sprop-maxcapturerate=Eb; maxaveragebitrate=Fb
then this entity can set its decoder at a sample rate of min(Da, Eb), and its encoder at a sample rate of min(Ea, Db) with a bitrate of at most Fb.
Symmetrically and intuitively, the other entity can set its decoder at a sample rate of min(Db, Ea), and its encoder at a sample rate of min(Eb, Da) with a bitrate of at most Fa.
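The rule above can be condensed into a short sketch (names and the dict representation of the fmtp parameters are mine, not part of any SDP library; absent parameters take the RFC 7587 defaults):

```python
DEFAULT_RATE = 48000      # absent rate hints mean "no limitation"
DEFAULT_BITRATE = 510000  # absent maxaveragebitrate means the maximum

def negotiate(local, remote):
    """Local encoder/decoder settings from the two fmtp parameter sets."""
    return {
        "decoder_rate": min(local.get("maxplaybackrate", DEFAULT_RATE),
                            remote.get("sprop-maxcapturerate", DEFAULT_RATE)),
        "encoder_rate": min(local.get("sprop-maxcapturerate", DEFAULT_RATE),
                            remote.get("maxplaybackrate", DEFAULT_RATE)),
        "encoder_bitrate": remote.get("maxaveragebitrate", DEFAULT_BITRATE),
    }

# An asymmetric outcome: we can only play back narrowband, while the
# remote side has no playback limitation, so we decode at 8 kHz but
# can still encode at 48 kHz.
local = {"maxplaybackrate": 8000}
remote = {"sprop-maxcapturerate": 16000}
print(negotiate(local, remote))
# {'decoder_rate': 8000, 'encoder_rate': 48000, 'encoder_bitrate': 510000}
```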
All these parameters are optional, as mentioned above, so various permutations are possible. In particular, when maxaveragebitrate is not provided, it is assumed to be the maximum (510000 bit/s).
I hope this clarifies some subtleties, or at least opens a discussion that eventually leads to a better understanding of the topic.