* fix(toolbar): make button hover bigger
* fix(toolbar): make hangup button bigger
* fix(always-on-top): make toolbar and buttons same sizes as main toolbar
* fix(toolbar): change some tooltips
* fix(toolbar): adjust side panel and filmstrip for new toolbar sizes
It seems that the external API will not send any event to let the SDK
consumer know that the conference has failed if the problem occurs at
the stage of establishing the XMPP connection. That's because the config
was loaded successfully, but the conference instance does not exist yet,
so neither base/config nor base/conference will emit any failure.
Make the external API emit CONFERENCE_WILL_JOIN early, on the SET_ROOM
action, which occurs before the XMPP connection is created. At this point
we know that the config has loaded and whether there's a valid conference
room to be joined. We were thinking of doing that even on CONFIG_WILL_LOAD,
but that seemed too risky at this point.
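A rough sketch of the middleware hook this describes (the sendEvent helper, import paths and event payload are assumptions, not necessarily the actual implementation):
    import { SET_ROOM } from '../../base/conference';
    import { MiddlewareRegistry } from '../../base/redux';
    // sendEvent is assumed to post the named event to the native SDK consumer.
    import { sendEvent } from './functions';
    MiddlewareRegistry.register(store => next => action => {
        const result = next(action);
        if (action.type === SET_ROOM && action.room) {
            // The config has loaded and there is a valid room, but the XMPP
            // connection has not been created yet, so emit the event early.
            sendEvent(store, 'CONFERENCE_WILL_JOIN', { url: action.room });
        }
        return result;
    });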
With this, the RN component and the consumer app can share the same
CallKit provider and configuration, and both can listen to the CallKit
flow events. The main driver of this is to enable the consumer app to
report an incoming call to the OS before loading the JitsiMeetView. Once
the user answers the call, the app can instantiate a JitsiMeetView, pass
the CallKit call UUID, and the Jitsi Meet components will handle the
connection and report back to CallKit that the call has been established.
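Purely as an illustration, the consumer-app side could look roughly like this (the import path and the callUUID prop are hypothetical names, not the exact SDK surface):
    import React from 'react';
    import { JitsiMeetView } from 'react-native-jitsi-meet-sdk'; // hypothetical import
    // uuid: the CallKit call UUID the consumer app already reported to the OS
    // when the incoming call arrived, before this view was ever mounted.
    export const Meeting = ({ uuid, url }) => (
        <JitsiMeetView
            callUUID={uuid} // hypothetical prop; reuses the existing CallKit call
            url={url} />
    );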
Hristo Terezov, Chris Cordle, and I/Lyubomir Marinov agreed that we'd
try to use "invite" & "invitee(s)" in Web/React's iframe API,
mobile/react-native's SDK invite API, and internally for the purposes of
consistency, ease of understanding, etc.
This can happen if multiple JitsiMeetView instances are active at the
same time, because there is a single bridge, which means all of them would get
the events.
Currently the enterPictureInPicture action can only be dispatched when
the app is on the conference view and the enter PiP button is displayed,
so no check should be necessary.
The Activity.enterPictureInPictureMode method must be invoked synchronously
in the userLeaveHint callback in order to be sure that the current Activity
is still visible (has not transitioned to the PAUSED state). Previously, if
the asynchronous processing was delayed long enough for the Activity to go
into the PAUSED state, it was too late to go into PiP mode.
To reduce the amount of motion that has to be blurred, use a canvas
to essentially set the FPS of the video background. The canvas
component is behind a temporary feature flag, as is the ability to
disable the blur, so both can be experimented with on deployed
environments.
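A minimal sketch of the canvas-based FPS capping idea (names and the 30 FPS figure are illustrative):
    // Draw the video element onto a canvas at a fixed rate so the blur filter
    // has less motion to deal with; the canvas is what actually gets blurred.
    function startLowFpsMirror(video, canvas, fps = 30) {
        const ctx = canvas.getContext('2d');
        const interval = setInterval(() => {
            canvas.width = video.videoWidth;
            canvas.height = video.videoHeight;
            ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
        }, 1000 / fps);
        // Returns a cleanup function to stop mirroring.
        return () => clearInterval(interval);
    }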
- Update font files to add new icon.
- Update markup and style so the icon has a small background
to fill in the text of the icon.
- Remove some css transitions that don't seem to do much.
In order to be able to add analytics to the deep-linking pages, the
lib-jitsi-meet initialization has been moved so that it happens earlier.
The introduced `initPromise` will eventually disappear once the conference
is migrated to React and/or support for Temasys is dropped. At that stage,
it can be turned into a sync function which all platforms share.
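Roughly, the shape of it is as follows (an approximation; connect/roomName are illustrative, and JitsiMeetJS and config are the web app's globals):
    // Kick off lib-jitsi-meet initialization as early as possible so analytics
    // can also be used on the deep-linking pages.
    const initPromise = JitsiMeetJS.init(config);
    // Everything that needs an initialized library waits on the shared promise,
    // e.g. when actually joining a conference:
    initPromise.then(() => connect(roomName));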
* Button conditionally shown based on if the feature is enabled and available
* Hooks for launching the invite UI (delegates to the native layer)
* Hooks for using the search and dial out checks from the native layer (calls back into JS)
* Hooks for handling sending invites and passing any failures back to the native layer
* Android and iOS handling for those hooks
Author: Ryan Peck <rpeck@atlassian.com>
Author: Eric Brynsvold <ebrynsvold@atlassian.com>
* Adds in-memory log storage, to be used while testing.
It is enabled only when config.debug is set, a configuration provided by jitsi-meet-torture.
* Moves to using the config.testing.testMode property for log storage.
* Fixes comments.
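A bare-bones sketch of the storage (the isReady/storeLogs interface is what the logger's LogCollector expects, to the best of my knowledge):
    // Keeps log lines in memory so jitsi-meet-torture can read them back.
    // Only plugged into the LogCollector when config.testing.testMode is set.
    class InMemoryLogStorage {
        constructor() {
            this.logs = [];
        }
        // The LogCollector checks this before flushing batches to the storage.
        isReady() {
            return true;
        }
        // Receives a batch of log entries (strings or { text } objects).
        storeLogs(logEntries) {
            for (const entry of logEntries) {
                this.logs.push(typeof entry === 'string' ? entry : entry.text);
            }
        }
    }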
Activity.enterPictureInPictureMode can fail for a couple of reasons
mentioned in the JSDoc:
"The system may disallow entering picture-in-picture in various cases,
including when the activity is not visible, if the screen is locked or
if the user has an activity pinned."
It seems safe to assume that those cases will be caught by
a RuntimeException handler (only RuntimeExceptions can be left without
an explicit catch block).
Anyway, the root cause of the problem is the fact that the current process
for going into picture-in-picture mode is not synchronised with the
Activity's lifecycle. In the Activity's "userLeaveHint" callback we dispatch
an async task to the JS code, which only then, after dispatching some more
stuff, eventually calls the native method that enters PiP. If we spend too
much time on the JS side and the Activity goes into the PAUSED state, the
call will fail with an IllegalStateException: "activity is not visible",
"activity is paused", etc. This means that with this fix the app will not
crash, but we'll sometimes see it not go into PiP mode as expected.
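On the JS side this simply surfaces as a rejected promise that gets logged rather than a crash; a sketch (the module and method names used here are assumptions):
    import { NativeModules } from 'react-native';
    const { PictureInPicture } = NativeModules; // assumed native module name
    // If the Activity already moved to PAUSED, the native side now rejects the
    // promise (instead of the RuntimeException crashing the app).
    PictureInPicture.enterPictureInPicture()
        .catch(e => console.warn('Failed to enter PiP mode:', e));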
* feat(recording): show the YouTube live stream URL
- From the start live stream dialog, push up the broadcast ID
of the chosen broadcast. It is assumed the ID can be used to
create the YouTube link.
- Listen for lib-jitsi-meet to emit updates of the known live
stream URL, shove it into redux, and have InfoDialog display
it.
* ref(info): pass in dial in and live stream url
Passing these values in should trigger AtlasKit InlineDialog
to re-render and reposition itself.
* ref(info): use conference existence as trigger for autoshowing dialog
* feat(info): add live stream link to invite copy
* Revert "ref(info): use conference existence as trigger for autoshowing dialog"
This reverts commit 1072102267.
* hidden -> url
* _onClickHiddenURL -> _onClickURLText
Since the main conference container is no longer "clickable" there must
be a way to click on the "large video". A clickable TestHint nested
in ParticipantView makes it easier to deal with the fact that the
click handler is not always on the same component (required for the
pinch and zoom feature to work correctly).
Allows binding a click handler to a TestHint.
When a mobile test wants to click a UI element it must be able to
locate it through the accessibility layer. The problem with that is
that there is currently no uniform way of finding an element on both iOS
and Android. This is solved by the TestHint component, which takes
an id parameter that can then be specified in the corresponding Java
TestHint class in jitsi-meet-torture to easily find it. By being able to
add a click handler to a TestHint, it's possible to duplicate the original
handler under a nested TestHint and then find it easily on the torture
side.
Adds the logic to render TestHint only when the test mode is enabled,
in order to be able to put independent TestHints in places other than
the TestConnectionInfo component.
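A simplified sketch of the component (the exact props and accessibility wiring are approximations):
    import React from 'react';
    import { Text, TouchableOpacity } from 'react-native';
    // An accessible element the torture tests can locate by id on both iOS and
    // Android and, when onPress is supplied, press instead of the original
    // (harder to reach) handler.
    export const TestHint = ({ id, onPress, value = '' }) => (
        <TouchableOpacity
            accessibilityLabel={id}
            onPress={onPress}>
            <Text>{value}</Text>
        </TouchableOpacity>
    );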
Be explicit about the appearance we desire, since each mounted StatusBar
component will override the existing values. In this case, the problem was
caused because the default on iOS is dark, whereas it's light on Android.
Set it to light so it works consistently across both, which is what we want.
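In React Native terms that boils down to rendering the StatusBar with an explicit style, e.g.:
    import React from 'react';
    import { StatusBar } from 'react-native';
    // 'light-content' gives the same (light) appearance on both platforms
    // instead of relying on the differing platform defaults.
    export const statusBar = (
        <StatusBar barStyle='light-content' />
    );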
It's too sensitive and most of the time I cannot perform an onPress. In
contrast, the builtin/default/standard onPress is noticeably more
forgiving. While we fix the sensitivity of "pinch to zoom", don't use
its onPress unless absolutely necessary, i.e. use it only for desktop
streams.
Zoltan Bettenbuk suggested the following:
  const state = getState();
  if (desiredTypes.length === 0) {
-     const { audio, video } = state['features/base/media'];
-
-     audio.muted || desiredTypes.push(MEDIA_TYPE.AUDIO);
-     video.muted || desiredTypes.push(MEDIA_TYPE.VIDEO);
+     const startAudioOnly = getPropertyValue(state, 'startAudioOnly');
+     const startWithAudioMuted
+         = getPropertyValue(state, 'startWithAudioMuted');
+     const startWithVideoMuted
+         = getPropertyValue(state, 'startWithVideoMuted');
+
+     if (!startWithAudioMuted) {
+         desiredTypes.push(MEDIA_TYPE.AUDIO);
+     }
+     if (!startAudioOnly && !startWithVideoMuted) {
+         desiredTypes.push(MEDIA_TYPE.VIDEO);
+     }
  }
  const availableTypes
The final commit is really a different implementation of the same idea
but takes into account that the state of base/media already contains the
intent of the URL and notices the delay in the realization of the
background app state.
Additionally, unbreaks one more case where setAudioOnly is incorrectly
dispatched on CONFERENCE_LEFT or CONFERENCE_FAILED and, consequently,
overrides the intent of the URL.
* feat(Deeplinking): Implement for web.
* ref(unsupported_browser): Move the mobile version to deeplinking feature
* feat(deeplinking_mobile): Redesign.
* fix(deeplinking): Use interface.NATIVE_APP_NAME.
* feat(dial_in_summary): Add the PIN to the number link.
* fix(deep_linking): Handle the use case when there isn't a deep linking image.
* fix(deep_linking): css
* fix(deep_linking): deeplink -> "deep linking"
* fix(deeplinking_css): Remove position: fixed
* docs(deeplinking): Add comment for the openWebApp action.
* fix(recording): fetch events also for available broadcasts
Only "persistent" broadcasts were being fetched using the
YouTube API. Fetching "all" will get persistent broadcasts
and events. If events use a custom encoder then the stream
key can be obtained. If google hangouts is used for the event
then a stream key will not be obtainable; in those cases
input empty string as the stream key.
* squash: fix typos, reword comments, use object for preventing duplicate broadcasts
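The change essentially amounts to asking the liveBroadcasts.list endpoint for every broadcast type; shown here with the gapi client purely as an illustration:
    // Previously only broadcastType: 'persistent' was requested; 'all' also
    // returns events. Events streamed via Google Hangouts have no bound
    // stream, so an empty stream key is used for those.
    gapi.client.youtube.liveBroadcasts.list({
        broadcastType: 'all',
        mine: true,
        part: 'id,snippet,contentDetails,status'
    }).then(response => console.log(response.result.items));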
The feature was not ported to the new toolbar. Arguably these
can all be moved into notifications, but for now the logic will
simply be removed and worked on again as demand arises.
Adds TestConnectionInfo component which exposes some internal app state
to the jitsi-meet-torture through the UI accessibility layer. This
component will render only if config.testing.testMode is set to true.
Moves the statsEmitter.start() invocation to the middleware of
the connection-indicator feature, so that it's started for both mobile
and web (now mobile needs RTP stats for the tests).
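A sketch of that middleware hook (the triggering action, import paths and the argument to start() are assumptions):
    import { CONFERENCE_JOINED } from '../base/conference';
    import { MiddlewareRegistry } from '../base/redux';
    import statsEmitter from './statsEmitter';
    // Start listening for RTP stats on web and mobile alike, so the mobile
    // tests can read them through TestConnectionInfo.
    MiddlewareRegistry.register(() => next => action => {
        if (action.type === CONFERENCE_JOINED) {
            statsEmitter.start(action.conference);
        }
        return next(action);
    });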
On Android the files will be copied to the assets/sounds directory of
the SDK bundle at build time. To play them, the "asset:/" prefix has to
be used to locate the files correctly.
On iOS each sound file must be added to the SDK's Xcode project in order
to be bundled correctly. For playback we need to know the path of the SDK
bundle, which is now exposed by the AppInfo iOS module.
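The lookup on the JS side then becomes platform-dependent, roughly as follows (the AppInfo property name is an assumption):
    import { NativeModules, Platform } from 'react-native';
    // Android: sounds are copied into assets/sounds at build time and need the
    // asset:/ prefix. iOS: sounds live in the SDK bundle, whose path the
    // AppInfo module exposes.
    function getSoundPath(fileName) {
        return Platform.OS === 'android'
            ? `asset:/sounds/${fileName}`
            : `${NativeModules.AppInfo.sdkBundlePath}/${fileName}`; // assumed property
    }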
TouchableWithoutFeedback and TouchableHighlight interfere with the
implementation of 'pinch to zoom' to come. We prepare for it by driving
the onClick/onPress handler(s) out of Conference, through LargeVideo and
ParticipantView into Video itself where the bulk of 'pinch to zoom' will
be implemented.
In preparation for "pinch to zoom" support in desktop streams on mobile, make
certain Views not intervene in touch event handling. While the modification is
necessary for "pinch to zoom" which is coming later, it really makes sense for
the modified Views to not be involved in touching because they're used to aid
layout and/or animations and are to behave to the user as if they're not there.
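One way React Native supports this, used here purely as an illustration of the idea:
    import React from 'react';
    import { View } from 'react-native';
    // 'box-none' makes the View itself transparent to touches while still
    // letting its children receive them, so layout/animation wrappers stop
    // swallowing the gestures that pinch to zoom will need.
    export const PassThroughView = ({ children, style }) => (
        <View
            pointerEvents='box-none'
            style={style}>
            {children}
        </View>
    );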
Adds Nat64InfoModule which resolves IPv6 addresses for IPv4 addresses
in IPv6-only networks where the jitsi-meet deployment does not provide
any IPv6 addresses as ICE candidates.