mirror of
https://github.com/advplyr/audiobookshelf.git
synced 2026-05-12 06:21:30 +00:00
test: harden Smart Speed worklet coverage
This commit is contained in:
parent
76a04065df
commit
b4a9f8ad59
9 changed files with 183 additions and 1041 deletions
@@ -1,262 +0,0 @@
# Smart Speed Phase 1 Design: Web Audio API Pipeline Refactor

## Status

Work-in-progress for bead `audiobookshelf-hsc` (blocks `audiobookshelf-d8s`).

## Objective

Refactor the local audio playback pipeline so that it can optionally route audio through the Web Audio API (AudioContext + MediaElementAudioSourceNode). This prepares the ground for Phase 2 (real-time silence detection) without changing audible behaviour when Smart Speed is OFF.

---

## 1. Current Audio Pipeline Architecture

### 1.1 Core Player Files

- **`client/players/LocalAudioPlayer.js`** — The single source of truth for local HTML5 audio playback.
- **`client/players/PlayerHandler.js`** — Mediates between the UI (`MediaPlayerContainer.vue`) and the concrete player (`LocalAudioPlayer` or `CastPlayer`).
- **`client/players/CastPlayer.js`** — Chromecast player; **out of scope** for this refactor. Smart Speed will only apply to `LocalAudioPlayer`.

### 1.2 How Playback Currently Works

`LocalAudioPlayer` creates a raw `<audio>` element (`#audio-player`), appends it to `<body>`, and drives it directly:

```js
// client/players/LocalAudioPlayer.js (lines 31-40)
var audioEl = document.createElement('audio')
audioEl.id = 'audio-player'
audioEl.style.display = 'none'
document.body.appendChild(audioEl)
this.player = audioEl
```

Playback rate is set on the element itself:

```js
// client/players/LocalAudioPlayer.js (lines 267-271)
setPlaybackRate(playbackRate) {
  if (!this.player) return
  this.defaultPlaybackRate = playbackRate
  this.player.playbackRate = playbackRate
}
```

All other controls (`play`, `pause`, `seek`, `volume`, `currentTime`, `buffered`) interact with this raw `<audio>` node.

### 1.3 HLS Path

For transcoded streams, `hls.js` attaches to the same `<audio>` element:

```js
// client/players/LocalAudioPlayer.js (lines 180-183)
this.hlsInstance = new Hls(hlsOptions)
this.hlsInstance.attachMedia(this.player)
```

The Web Audio API pipeline must work for **both** direct-play and HLS paths.
### 1.4 User Settings Store

Settings are stored client-side in `localStorage` via the Vuex module `client/store/user.js`. The default state includes `playbackRate`, `playbackRateIncrementDecrement`, `jumpForwardAmount`, `jumpBackwardAmount`, and `useChapterTrack`. There is **no server-side persistence** of these UI settings; the server `User` model (`server/models/User.js`) does not store playback preferences.

Relevant snippet:

```js
// client/store/user.js (lines 4-22)
settings: {
  orderBy: 'media.metadata.title',
  orderDesc: false,
  filterBy: 'all',
  playbackRate: 1,
  playbackRateIncrementDecrement: 0.1,
  bookshelfCoverSize: 120,
  collapseSeries: false,
  collapseBookSeries: false,
  showSubtitles: false,
  useChapterTrack: false,
  seriesSortBy: 'name',
  seriesSortDesc: false,
  seriesFilterBy: 'all',
  authorSortBy: 'name',
  authorSortDesc: false,
  jumpForwardAmount: 10,
  jumpBackwardAmount: 10
}
```

---
## 2. Proposed Web Audio API Pipeline

### 2.1 High-Level Architecture

```
┌──────────────┐     ┌──────────────────────────────┐     ┌─────────┐
│   <audio>    │────▶│ MediaElementAudioSourceNode  │────▶│  Gain   │────▶ speakers
│  (src/HLS)   │     │ (created once per lifecycle) │     │  Node   │
└──────────────┘     └──────────────────────────────┘     └─────────┘
                                    │
                                    ▼
                        (future: AudioWorkletNode
                         for silence detection)
```

Even when Smart Speed is **disabled**, audio will flow through the AudioContext. This guarantees that:

1. The pipeline is already initialised when the user toggles Smart Speed ON.
2. Phase 2 only needs to insert/remap an `AudioWorkletNode` between `MediaElementAudioSourceNode` and the destination.
### 2.2 Playback Rate Through AudioContext

Once a worklet that manipulates time is inserted, setting `audio.playbackRate` alone will **not** be sufficient. For Phase 1, however, we have two compatible options:

**Option A (recommended):** Keep using `audio.playbackRate` even when routed through AudioContext. The `MediaElementAudioSourceNode` respects the media element's playback rate—its output clock is tied to the element. This is the simplest approach and requires zero additional code for rate control in Phase 1.

**Option B (rejected):** Use `AudioBufferSourceNode` with its `playbackRate` AudioParam. This would break the HLS path (HLS needs a media element), so it is rejected.

> We will proceed with **Option A** for Phase 1.
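The Option A contract can be sketched as a pure function: the `<audio>` element's `playbackRate` stays the single control point, whether or not audio is routed through the AudioContext. The helper name and signature below are hypothetical, not code from the repo; they only illustrate the rate arithmetic Phase 2 will eventually apply.

```javascript
// Hypothetical helper (not in the repo): computes the rate to apply to the
// <audio> element. Under Option A this is the only rate control point.
function resolveElementPlaybackRate(userRate, smartSpeedActive, smartSpeedRatio) {
  // During a detected silent span, Smart Speed multiplies the user's chosen
  // rate; otherwise the user rate passes through unchanged.
  return smartSpeedActive ? userRate * smartSpeedRatio : userRate
}

console.log(resolveElementPlaybackRate(1, false, 2.0)) // 1 (Smart Speed OFF)
console.log(resolveElementPlaybackRate(1, true, 2.0)) // 2 (inside a silent span)
console.log(resolveElementPlaybackRate(1.25, true, 2.0)) // 2.5 (user rate still respected)
```

Because `MediaElementAudioSourceNode` follows the element's clock, applying this result to `this.player.playbackRate` is all the routing-aware code Phase 1 needs.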
### 2.3 Lifecycle Rules

- One `AudioContext` per `LocalAudioPlayer` instance.
- One `MediaElementAudioSourceNode` per `AudioContext`.
- `AudioContext.state` must be resumed from a user gesture (e.g. `play()`). We will call `audioCtx.resume()` inside `play()`.
- `LocalAudioPlayer.destroy()` must close the context and disconnect all nodes to prevent memory leaks.
- Volume control should remain on the `<audio>` element (`audio.volume`) for simplicity unless we need node-level panning later.

---
## 3. `enableSmartSpeed` User Setting

### 3.1 Where to Add

Add `enableSmartSpeed: false` to the `settings` object in:

- **`client/store/user.js`** (default state)

No server-side change is required because user settings are purely client-side (`localStorage`).
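Because settings live only in `localStorage`, adding a new key needs no migration, assuming the store spreads persisted values over the defaults (the `{ ...defaults, ...stored }` merge pattern). A minimal sketch with hypothetical values:

```javascript
// Sketch only; assumes stored settings are merged over the defaults.
const defaults = { playbackRate: 1, useChapterTrack: false, enableSmartSpeed: false }
const persisted = { playbackRate: 1.5 } // what an existing user may have saved

const merged = { ...defaults, ...persisted }

console.log(merged.enableSmartSpeed) // false: new key falls back to the default
console.log(merged.playbackRate) // 1.5: stored value wins
```

Existing users therefore see Smart Speed OFF until they explicitly toggle it, which preserves the "no audible change" guarantee of this phase.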
### 3.2 UI Location

A toggle will eventually be added to `PlayerSettingsModal.vue` alongside `useChapterTrack`, `jumpForwardAmount`, etc. That work is deferred to Phase 4; for Phase 1 we only need the setting to exist in the store.

---
## 4. Fallback Strategy

### 4.1 Feature Detection

```js
// Guard against non-browser contexts before touching window properties
const AudioContextCtor = typeof window !== 'undefined' ? window.AudioContext || window.webkitAudioContext : undefined
const supportsWebAudio = !!AudioContextCtor
```
If `AudioContext` is unavailable (very rare in modern browsers), `LocalAudioPlayer` should operate exactly as it does today—no AudioContext wrapping of the `<audio>` element, direct playback.
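The detection can be wrapped in a small testable helper; `detectWebAudioSupport` is a hypothetical name, and the explicit global argument makes the fallback branch exercisable outside a browser:

```javascript
// Hypothetical helper: returns the available AudioContext constructor, or null.
function detectWebAudioSupport(globalObj) {
  if (!globalObj) return null
  return globalObj.AudioContext || globalObj.webkitAudioContext || null
}

const AudioContextCtor = detectWebAudioSupport(typeof window !== 'undefined' ? window : null)
const supportsWebAudio = !!AudioContextCtor
// True in any modern browser; false in Node or other non-browser contexts,
// which is exactly the case where the player keeps the direct <audio> path.
console.log(supportsWebAudio)
```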
### 4.2 iOS / Safari Considerations

Safari requires `AudioContext.resume()` to be called after a user gesture. Calling it inside `play()` covers this. The `webkitAudioContext` prefix is still needed for very old Safari versions; the feature-detection fallback handles both.

### 4.3 HLS Compatibility

`hls.js` attaches to the `<audio>` element. Since the element itself does not change—only its audio output is redirected via `MediaElementAudioSourceNode`—HLS continues to function identically.

---
## 5. Files That Need Modification

| File | Change |
|------|--------|
| `client/players/LocalAudioPlayer.js` | Wrap `<audio>` in `AudioContext` + `MediaElementAudioSourceNode`; add `supportsWebAudio` flag; update `destroy()` to close the context; update `play()` to resume it |
| `client/store/user.js` | Add `enableSmartSpeed: false` to the default state |
| `client/strings/en-us.json` | Add `LabelEnableSmartSpeed` (deferred to Phase 4, but documented here) |
| `client/components/modals/PlayerSettingsModal.vue` | Add toggle UI (deferred to Phase 4) |

---
## 6. Minimal Skeleton Implementation (Phase 1)

The following diff-style plan outlines a **non-breaking** change to `LocalAudioPlayer.js`.

### 6.1 Add AudioContext properties

```js
constructor(ctx) {
  // ... existing ...
  this.audioContext = null
  this.audioSourceNode = null
  this.usingWebAudio = false
  // ...
}
```

### 6.2 Initialise pipeline after audio element creation

```js
initialize() {
  // ... existing audio element creation ...
  this.initWebAudio()
}

initWebAudio() {
  const AudioContextCtor = window.AudioContext || window.webkitAudioContext
  if (!AudioContextCtor) {
    console.warn('[LocalPlayer] Web Audio API not supported, falling back to direct audio')
    return
  }
  try {
    this.audioContext = new AudioContextCtor()
    this.audioSourceNode = this.audioContext.createMediaElementSource(this.player)
    this.audioSourceNode.connect(this.audioContext.destination)
    this.usingWebAudio = true
    console.log('[LocalPlayer] Web Audio API pipeline initialised')
  } catch (err) {
    console.error('[LocalPlayer] Failed to initialise Web Audio API', err)
    this.usingWebAudio = false
  }
}
```

### 6.3 Resume context on play

```js
play() {
  this.playWhenReady = true
  if (this.player) {
    if (this.usingWebAudio && this.audioContext && this.audioContext.state === 'suspended') {
      this.audioContext.resume()
    }
    this.player.play()
  }
}
```

### 6.4 Clean up on destroy

```js
destroy() {
  this.destroyHlsInstance()
  // Disconnect the source node before closing the context it belongs to
  if (this.audioSourceNode) {
    this.audioSourceNode.disconnect()
    this.audioSourceNode = null
  }
  if (this.audioContext) {
    this.audioContext.close()
    this.audioContext = null
  }
  if (this.player) {
    this.player.remove()
  }
}
```

### 6.5 No change to `setPlaybackRate`

Because we are using Option A, `setPlaybackRate` continues to set `this.player.playbackRate = playbackRate`. The `MediaElementAudioSourceNode` inherits this rate.
---

## 7. Testing Checklist (Manual)

- [ ] Audio plays normally through the Web Audio pipeline with `usingWebAudio = true`
- [ ] Playback rate changes are audible and reported correctly in the UI
- [ ] HLS transcoded streams still play
- [ ] No audible degradation or latency is introduced
- [ ] Player can be destroyed and re-created without leaking AudioContexts (check heap snapshots in the DevTools Memory tab)
- [ ] `enableSmartSpeed` setting persists in `localStorage` across reloads
- [ ] Graceful fallback on browsers with no `AudioContext`

---
## 8. Phase 2+ Notes (Out of Scope)

- **Silence detection:** An `AudioWorkletNode` (or `ScriptProcessorNode` fallback) will be inserted between `audioSourceNode` and `audioContext.destination`.
- **Time-stretching:** The worklet will compress silent segments by adjusting buffer playback timing.
- **Progress tracking:** When Smart Speed is ON, wall-clock time and `audio.currentTime` will diverge. The UI must account for this—`LocalAudioPlayer.getCurrentTime()` may need to map compressed time back to real time for progress sync.
- **CastPlayer:** Will continue to receive normal-speed audio, unaffected.
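The compressed-time bookkeeping behind progress tracking reduces to arithmetic over silent regions: a region of length `L` ms played at `ratio` saves `L * (1 - 1/ratio)` of wall-clock time. The class below is a hypothetical illustration, not the repo's `TimeMapper`; its numbers mirror the test-harness expectation that a 1400-3000 ms silence at 2.5x saves roughly 960 ms.

```javascript
// Hypothetical sketch of the Phase 2 time-mapping problem described above.
class SilenceTimeMapper {
  constructor(ratio) {
    this.ratio = ratio
    this.regions = [] // { start, end } in media-time milliseconds
  }

  addRegion(start, end) {
    this.regions.push({ start, end })
  }

  // Wall-clock milliseconds saved for media played up to `mediaMs`:
  // each region contributes its overlap with [0, mediaMs] scaled by the
  // fraction of time the speed-up removes.
  savedUpTo(mediaMs) {
    return this.regions.reduce((saved, region) => {
      const overlap = Math.max(0, Math.min(mediaMs, region.end) - region.start)
      return saved + overlap * (1 - 1 / this.ratio)
    }, 0)
  }
}

const mapper = new SilenceTimeMapper(2.5)
mapper.addRegion(1400, 3000) // a 1.6 s silent span
// 1600 * (1 - 1/2.5) = approximately 960 ms saved
console.log(mapper.savedUpTo(4000))
```

`getCurrentTime()` could then report `audio.currentTime * 1000 - savedUpTo(...)` style corrections, though the exact contract belongs to Phase 2.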
---

*Document produced as part of bead `audiobookshelf-hsc` — Smart Speed Phase 1.*

Binary file not shown.
@@ -1,93 +1,42 @@
import LocalAudioPlayer from '../../../players/LocalAudioPlayer'

describe('LocalAudioPlayer', () => {
  let mockPort;
  let mockAudioContext;
  let mockAudioWorkletNode;
  it('increases playbackRate during silence with the real Web Audio pipeline', () => {
    const localPlayer = new LocalAudioPlayer({})

  beforeEach(() => {
    // Mock for AudioWorkletNode message port
    mockPort = {
      onmessage: null,
      postMessage: cy.stub()
    };
    expect(localPlayer.player.playbackRate).to.equal(1)

    // Mock AudioWorkletNode
    mockAudioWorkletNode = {
      port: mockPort,
      connect: cy.stub(),
      disconnect: cy.stub()
    };
    cy.wrap(localPlayer.setSmartSpeed(true)).then(() => {
      expect(localPlayer.enableSmartSpeed).to.be.true
      expect(localPlayer.usingWebAudio).to.be.true
      expect(localPlayer.audioContext).to.not.be.null
      expect(localPlayer.audioSourceNode).to.not.be.null
      expect(localPlayer.silenceDetectorNode).to.not.be.null
      expect(localPlayer.silenceDetectorNode.constructor.name).to.equal('AudioWorkletNode')

    // Mock AudioContext
    mockAudioContext = {
      audioWorklet: {
        addModule: cy.stub().resolves()
      },
      createMediaElementSource: cy.stub().returns({
        connect: cy.stub(),
        disconnect: cy.stub()
      }),
      destination: {},
      state: 'running',
      currentTime: 10
    };

    // Make AudioWorkletNode available globally so `new AudioWorkletNode` works
    if (!window.AudioWorkletNode) {
      window.AudioWorkletNode = function() { return mockAudioWorkletNode; };
    } else {
      cy.stub(window, 'AudioWorkletNode').returns(mockAudioWorkletNode);
    }

    if (!window.AudioContext) {
      window.AudioContext = function() { return mockAudioContext; };
    } else {
      cy.stub(window, 'AudioContext').returns(mockAudioContext);
    }

    if (window.webkitAudioContext) {
      cy.stub(window, 'webkitAudioContext').returns(mockAudioContext);
    }
  });

  it('increases playbackRate during silence', () => {
    const localPlayer = new LocalAudioPlayer({});

    // Default playback rate should be 1
    expect(localPlayer.player.playbackRate).to.equal(1);

    cy.wrap(localPlayer).should('have.property', 'usingWebAudio', true).then(() => {
      // Enable smart speed (this should trigger initSilenceDetector)
      return localPlayer.setSmartSpeed(true);
    }).then(() => {
      expect(localPlayer.enableSmartSpeed).to.be.true;
      expect(mockAudioContext.audioWorklet.addModule).to.have.been.calledWith('/client/players/smart-speed/SilenceDetectorProcessor.js');
      expect(localPlayer.silenceDetectorNode).to.equal(mockAudioWorkletNode);

      // Simulate silence start
      mockPort.onmessage({
      localPlayer.player.currentTime = 5
      localPlayer.silenceDetectorNode.port.onmessage({
        data: {
          type: 'silence-start',
          time: 5000 // 5 seconds
          time: localPlayer.audioContext.currentTime * 1000
        }
      });

      // The smartSpeedRatio is 2.0 by default, so playbackRate should be 2.0
      expect(localPlayer.player.playbackRate).to.equal(2.0);

      // Simulate silence end
      mockPort.onmessage({
      })

      expect(localPlayer.player.playbackRate).to.equal(2.0)

      localPlayer.player.currentTime = 8
      localPlayer.silenceDetectorNode.port.onmessage({
        data: {
          type: 'silence-end',
          time: 8000 // 8 seconds
          time: localPlayer.audioContext.currentTime * 1000
        }
      });

      // Should return to default 1.0
      expect(localPlayer.player.playbackRate).to.equal(1.0);
    });
  });
})

      expect(localPlayer.player.playbackRate).to.equal(1.0)

      localPlayer.destroy()
    })
  })

  it('maps currentTime, duration, and seek through the same Smart Speed wall-clock contract', () => {
    const localPlayer = new LocalAudioPlayer({});
@@ -1,351 +0,0 @@
import Vue from 'vue'
import Vuex from 'vuex'
import MediaPlayerContainer from '../../../components/app/MediaPlayerContainer.vue'
import * as rootStore from '../../../store/index'
import * as userStore from '../../../store/user'
import * as globalsStore from '../../../store/globals'
import * as librariesStore from '../../../store/libraries'

Vue.use(Vuex)

const FIXTURE_URL = '/__cypress/fixtures/test-audio.wav'
const TEST_LIBRARY_ID = 'lib-test'
const TEST_ITEM_ID = 'item-test'
const TEST_SESSION_ID = 'session-test'
const SESSION_TRACK_URL = `/public/session/${TEST_SESSION_ID}/track/0`

const makeLibraryItem = () => ({
  id: TEST_ITEM_ID,
  libraryId: TEST_LIBRARY_ID,
  mediaType: 'book',
  updatedAt: 1714608000000,
  media: {
    coverPath: null,
    duration: 4,
    metadata: {
      title: 'Smart Speed Harness Fixture',
      authors: [{ id: 'author-1', name: 'Harness Author' }],
      explicit: false
    },
    chapters: [{ id: 'chapter-1', start: 0, end: 4, title: 'Fixture Chapter' }]
  }
})

const buildStore = () => {
  return new Vuex.Store({
    state: rootStore.state(),
    getters: rootStore.getters,
    mutations: rootStore.mutations,
    modules: {
      user: {
        namespaced: true,
        state: userStore.state(),
        getters: userStore.getters,
        mutations: userStore.mutations,
        actions: userStore.actions
      },
      globals: {
        namespaced: true,
        state: globalsStore.state(),
        getters: globalsStore.getters,
        mutations: globalsStore.mutations
      },
      libraries: {
        namespaced: true,
        state: librariesStore.state(),
        getters: librariesStore.getters,
        mutations: librariesStore.mutations
      }
    }
  })
}

const createAudioContextStub = () => {
  const sourceNode = {
    connect: cy.stub().as('audioSourceConnect'),
    disconnect: cy.stub().as('audioSourceDisconnect')
  }

  const silenceDetectorNode = {
    connect: cy.stub().as('silenceDetectorConnect'),
    disconnect: cy.stub().as('silenceDetectorDisconnect'),
    port: {
      onmessage: null,
      postMessage: cy.stub().as('silenceDetectorPostMessage')
    }
  }

  const audioContext = {
    destination: { label: 'destination' },
    state: 'running',
    currentTime: 0,
    resume: cy.stub().callsFake(() => {
      audioContext.state = 'running'
      return Promise.resolve()
    }).as('audioContextResume'),
    suspend: cy.stub().callsFake(() => {
      audioContext.state = 'suspended'
      return Promise.resolve()
    }).as('audioContextSuspend'),
    close: cy.stub().resolves().as('audioContextClose'),
    createMediaElementSource: cy.stub().returns(sourceNode).as('createMediaElementSource'),
    audioWorklet: {
      addModule: cy.stub().resolves().as('audioWorkletAddModule')
    }
  }

  return { audioContext, silenceDetectorNode }
}

describe('MediaPlayerContainer', () => {
  beforeEach(() => {
    cy.viewport(1280, 900)

    cy.window().then((win) => {
      win.MediaMetadata = function MediaMetadata(metadata) {
        Object.assign(this, metadata)
      }
      const mediaSession = {
        playbackState: 'none',
        metadata: null,
        setActionHandler: cy.stub().as('setActionHandler')
      }

      Object.defineProperty(win.navigator, 'mediaSession', {
        configurable: true,
        get() {
          return mediaSession
        }
      })
    })
  })

  it('compresses silence through the real container playback path', () => {
    const store = buildStore()
    const eventBus = new Vue()
    const libraryItem = makeLibraryItem()
    const { audioContext, silenceDetectorNode } = createAudioContextStub()

    store.commit('setRouterBasePath', '')
    store.commit('libraries/addUpdate', {
      id: TEST_LIBRARY_ID,
      mediaType: 'book',
      settings: { coverAspectRatio: 0 }
    })
    store.commit('libraries/setCurrentLibrary', { id: TEST_LIBRARY_ID })
    store.commit('user/setUser', {
      id: 'user-1',
      type: 'root',
      mediaProgress: [],
      bookmarks: [],
      permissions: { update: true, delete: true, download: true, upload: true, accessAllLibraries: true },
      librariesAccessible: [TEST_LIBRARY_ID]
    })
    store.commit('user/setSettings', {
      ...store.state.user.settings,
      enableSmartSpeed: true,
      smartSpeedRatio: 2.5,
      playbackRate: 1,
      playbackRateIncrementDecrement: 0.1,
      jumpForwardAmount: 10,
      jumpBackwardAmount: 10,
      useChapterTrack: false
    })

    cy.intercept('GET', `/api/items/${TEST_ITEM_ID}?expanded=1`, {
      statusCode: 200,
      body: libraryItem
    }).as('getLibraryItem')

    cy.intercept('POST', `/api/items/${TEST_ITEM_ID}/play`, (req) => {
      expect(req.body.mediaPlayer).to.equal('html5')
      req.reply({
        statusCode: 200,
        body: {
          id: TEST_SESSION_ID,
          libraryItem,
          episodeId: null,
          displayTitle: 'Smart Speed Harness Fixture',
          displayAuthor: 'Harness Author',
          currentTime: 0,
          playMethod: 0,
          audioTracks: [
            {
              index: 0,
              startOffset: 0,
              duration: 4,
              contentUrl: FIXTURE_URL,
              mimeType: 'audio/wav'
            }
          ]
        }
      })
    }).as('startPlaybackSession')

    cy.intercept('POST', `/api/session/${TEST_SESSION_ID}/close`, {
      statusCode: 200,
      body: {}
    }).as('closePlaybackSession')

    cy.intercept('POST', `/api/session/${TEST_SESSION_ID}/sync`, {
      statusCode: 200,
      body: {}
    }).as('syncPlaybackSession')

    cy.mount(MediaPlayerContainer, {
      store,
      stubs: {
        'covers-book-cover': { template: '<div data-testid="cover"></div>' },
        'ui-tooltip': { template: '<div><slot /></div>' },
        'modals-bookmarks-modal': { template: '<div />' },
        'modals-sleep-timer-modal': { template: '<div />' },
        'modals-player-queue-items-modal': { template: '<div />' },
        'modals-chapters-modal': { template: '<div />' },
        'modals-player-settings-modal': { template: '<div />' },
        'player-ui': {
          template: '<button aria-label="Play" @click="$emit(\'playPause\')">Play</button>',
          methods: {
            init() {},
            setDuration() {},
            setCurrentTime() {},
            setBufferTime() {},
            setStreamReady() {},
            setChunksReady() {},
            checkUpdateChapterTrack() {},
            prevChapter() {},
            nextChapter() {}
          }
        },
        'controls-playback-speed-control': { template: '<div />' },
        'controls-volume-control': { template: '<div />' },
        'player-track-bar': { template: '<div />', methods: { setDuration() {}, setUseChapterTrack() {}, setCurrentTime() {}, setBufferTime() {}, setPercentageReady() {} } },
        'nuxt-link': { template: '<a><slot /></a>' }
      },
      mocks: {
        $axios: {
          $get: (url) => fetch(url).then((res) => res.json()),
          $post: (url, body) => fetch(url, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: body === undefined ? undefined : JSON.stringify(body)
          }).then((res) => res.json())
        },
        $eventBus: eventBus,
        $toast: Object.assign(cy.stub().as('toast'), {
          info: cy.stub().as('toastInfo'),
          dismiss: cy.stub().as('toastDismiss')
        }),
        $config: { routerBasePath: '' },
        $socket: { client: { on: cy.stub(), off: cy.stub(), emit: cy.stub() } },
        $hotkeys: { AudioPlayer: {} },
        $secondsToTimestamp: (seconds) => {
          const totalSeconds = Math.max(0, Math.floor(Number(seconds) || 0))
          const hours = String(Math.floor(totalSeconds / 3600)).padStart(2, '0')
          const minutes = String(Math.floor((totalSeconds % 3600) / 60)).padStart(2, '0')
          const secs = String(totalSeconds % 60).padStart(2, '0')
          return `${hours}:${minutes}:${secs}`
        },
        $getString: (key, values = []) => [key, ...values].join(' '),
        $randomId: () => 'device-test-id'
      }
    }).then(() => {
      cy.window().then((win) => {
        win.AudioContext = function AudioContext() {
          return audioContext
        }
        win.webkitAudioContext = undefined

        win.AudioWorkletNode = function AudioWorkletNode() {
          return silenceDetectorNode
        }

        cy.stub(win.HTMLMediaElement.prototype, 'load').callsFake(function load() {
          if (this.__absLoadedMetadataDispatched) return

          this.__absLoadedMetadataDispatched = true
          this.dispatchEvent(new win.Event('loadedmetadata'))
        }).as('mediaLoad')

        cy.stub(win.HTMLMediaElement.prototype, 'play').callsFake(function play() {
          if (this.__absIsPlaying) return Promise.resolve()

          this.__absIsPlaying = true
          this.dispatchEvent(new win.Event('play'))
          this.dispatchEvent(new win.Event('playing'))
          return Promise.resolve()
        }).as('mediaPlayCall')

        cy.stub(win.HTMLMediaElement.prototype, 'pause').callsFake(function pause() {
          if (!this.__absIsPlaying) return

          this.__absIsPlaying = false
          this.dispatchEvent(new win.Event('pause'))
        }).as('mediaPause')
      })
    })

    cy.then(() => {
      eventBus.$emit('play-item', { libraryItemId: TEST_ITEM_ID })
    })

    cy.wait('@getLibraryItem')
    cy.wait('@startPlaybackSession').its('request.body').should('deep.include', {
      mediaPlayer: 'html5',
      forceTranscode: false
    })
    cy.get('#mediaPlayerContainer').should('exist')
    cy.then(() => {
      Cypress.vueWrapper.vm.$refs.audioPlayer.init()
    })
    cy.get('button[aria-label="Play"]').click()

    cy.get('@mediaLoad').should('have.been.called')
    cy.get('@mediaPlayCall').should('have.been.calledTwice')
    cy.get('@createMediaElementSource').should('have.been.calledOnce')
    cy.get('@audioWorkletAddModule').should('have.been.calledOnce')
    cy.get('audio#audio-player').should(($audio) => {
      expect($audio[0].src).to.include(SESSION_TRACK_URL)
    })

    cy.then(() => {
      const vm = Cypress.vueWrapper.vm
      const player = vm.playerHandler.player
      const audioEl = player.player

      expect(vm.playerHandler.libraryItemId).to.equal(TEST_ITEM_ID)
      expect(vm.playerHandler.currentSessionId).to.equal(TEST_SESSION_ID)
      expect(vm.playerHandler.isPlayingLocalItem).to.equal(true)
      expect(vm.$store.state.streamLibraryItem.id).to.equal(TEST_ITEM_ID)
      expect(vm.$store.state.playbackSessionId).to.equal(TEST_SESSION_ID)
      expect(vm.isPlaying).to.equal(true)
      expect(player.enableSmartSpeed).to.equal(true)
      expect(player.smartSpeedRatio).to.equal(2.5)
      expect(player.silenceDetectorNode).to.equal(silenceDetectorNode)
      expect(audioEl.playbackRate).to.equal(1)
    })

    cy.then(() => {
      const player = Cypress.vueWrapper.vm.playerHandler.player
      const audioEl = player.player
      const startWallClock = Date.now()

      audioContext.currentTime = 1.4
      audioEl.currentTime = 1.4
      silenceDetectorNode.port.onmessage({ data: { type: 'silence-start', time: 1400 } })
      expect(audioEl.playbackRate).to.equal(2.5)

      audioContext.currentTime = 3.0
      audioEl.currentTime = 3.0
      silenceDetectorNode.port.onmessage({ data: { type: 'silence-end', time: 3000 } })
      expect(audioEl.playbackRate).to.equal(1)

      audioEl.currentTime = 3.2
      audioEl.dispatchEvent(new window.Event('ended'))

      const elapsedMs = Date.now() - startWallClock + 3200 / 2.5
      expect(elapsedMs).to.be.lessThan(3500)
      expect(player.silenceMap.getRegions()).to.deep.equal([{ start: 1400, end: 3000 }])
      expect(player.timeMapper.totalTimeSaved()).to.be.closeTo(960, 0.001)
    })
  })
})
@@ -1,265 +1,91 @@
import LocalAudioPlayer from '../../../players/LocalAudioPlayer'
|
||||
import TimeMapper from '../../../players/smart-speed/TimeMapper'
|
||||
|
||||
/**
|
||||
* E2E Test for Smart Speed with REAL Audio and REAL Web Audio API
|
||||
*
|
||||
* This test proves that Smart Speed works end-to-end with:
|
||||
* - Real audio file (test-audio.wav: 1s tone, 2s silence, 1s tone = 4s total)
|
||||
* - Real Web Audio API (AudioContext, AudioWorkletNode - no mocking)
|
||||
* - Real silence detection and playback rate transitions
|
||||
*
|
||||
* Expected behavior:
|
||||
* - Audio worklet is initialized with real AudioWorkletNode
|
||||
* - During the 2s silence period (1s-3s), playback rate increases to 2.5x
|
||||
* - After silence, playback rate returns to 1.0x
|
||||
* - Total calculated wall-clock time < 3.5s (compressed from 4s)
|
||||
*
|
||||
* Note: We use the REAL Web Audio API classes (AudioContext, AudioWorkletNode)
|
||||
* and manually trigger silence detection events to prove the Smart Speed logic.
|
||||
*/
|
||||
describe('Smart Speed E2E with Real Audio', () => {
|
||||
let audioFixture
|
||||
function createToneSilenceToneBuffer(audioContext) {
|
||||
const sampleRate = audioContext.sampleRate
|
||||
const durationSeconds = 1.2
|
||||
const buffer = audioContext.createBuffer(1, sampleRate * durationSeconds, sampleRate)
|
||||
const channel = buffer.getChannelData(0)
|
||||
|
||||
before(() => {
|
||||
// Load the real audio fixture as a blob
|
||||
cy.fixture('test-audio.wav', 'base64').then((base64) => {
|
||||
// Convert base64 to blob
|
||||
const binaryString = atob(base64)
|
||||
const bytes = new Uint8Array(binaryString.length)
|
||||
for (let i = 0; i < binaryString.length; i++) {
|
||||
bytes[i] = binaryString.charCodeAt(i)
|
||||
}
|
||||
audioFixture = new Blob([bytes], { type: 'audio/wav' })
|
||||
for (let i = 0; i < channel.length; i++) {
|
||||
const seconds = i / sampleRate
|
||||
const isTone = seconds < 0.3 || seconds >= 0.9
|
||||
channel[i] = isTone ? Math.sin(2 * Math.PI * 440 * seconds) * 0.25 : 0
|
||||
}
|
||||
|
||||
return buffer
|
||||
}
|
||||
|
||||
describe('Smart Speed E2E with real Web Audio', () => {
  it('detects silence from real generated audio with the real AudioWorklet', () => {
    const AudioContextCtor = window.AudioContext || window.webkitAudioContext
    expect(AudioContextCtor).to.exist

    const audioContext = new AudioContextCtor()
    const messages = []

    cy.wrap(audioContext.audioWorklet.addModule('/smart-speed/SilenceDetectorProcessor.js')).then(() => {
      const detectorNode = new AudioWorkletNode(audioContext, 'silence-detector')
      detectorNode.port.onmessage = (event) => messages.push(event.data)

      const source = audioContext.createBufferSource()
      source.buffer = createToneSilenceToneBuffer(audioContext)
      source.connect(detectorNode)
      detectorNode.connect(audioContext.destination)

      return audioContext.resume().then(() => {
        source.start()
        return new Promise((resolve) => {
          source.onended = resolve
        })
      }).then(() => {
        detectorNode.disconnect()
        return audioContext.close()
      })
    }).then(() => {
      const silenceStart = messages.find((message) => message.type === 'silence-start')
      const silenceEnd = messages.find((message) => message.type === 'silence-end')

      expect(silenceStart).to.exist
      expect(silenceEnd).to.exist
      expect(silenceStart.time).to.be.within(250, 450)
      expect(silenceEnd.time).to.be.within(850, 1050)
    })
  })

  it('compresses silence with real audio and real Web Audio API', function() {
    // This test uses the real Web Audio API - no mocking!
  it('compresses silence in LocalAudioPlayer through the real worklet node', () => {
    const localPlayer = new LocalAudioPlayer({})

    // Verify Web Audio is available (not mocked)
    expect(localPlayer.usingWebAudio).to.equal(true)
    expect(localPlayer.audioContext).to.not.be.null
    expect(localPlayer.audioContext.constructor.name).to.match(/AudioContext/)
    console.log(`✓ Real ${localPlayer.audioContext.constructor.name} initialized`)

    // Create an object URL for our audio fixture
    const audioUrl = URL.createObjectURL(audioFixture)

    // Set up the audio element with our fixture
    localPlayer.player.src = audioUrl

    // Set Smart Speed ratio to 2.5
    localPlayer.smartSpeedRatio = 2.5

    // Try to load audio, but if it fails in headless mode, that's OK
    // We can still test the Smart Speed logic
    cy.then(() => {
      return new Promise((resolve) => {
        const timeout = setTimeout(() => {
          // Timeout - audio didn't load (expected in headless)
          console.log(`⚠ Audio metadata didn't load (expected in headless mode)`)
          console.log(`  Manually setting duration for testing...`)
          // Manually set duration for testing purposes
          Object.defineProperty(localPlayer.player, 'duration', {
            value: 4.0,
            configurable: true
          })
          resolve(4.0)
        }, 2000)

        localPlayer.player.addEventListener('loadedmetadata', () => {
          clearTimeout(timeout)
          const duration = localPlayer.player.duration
          console.log(`✓ Audio loaded: duration = ${duration.toFixed(3)}s`)
          resolve(duration)
        })

        localPlayer.player.addEventListener('error', (e) => {
          clearTimeout(timeout)
          console.log(`⚠ Audio loading error (expected in headless mode)`)
          // Manually set duration for testing
          Object.defineProperty(localPlayer.player, 'duration', {
            value: 4.0,
            configurable: true
          })
          resolve(4.0)
        })

        // Try to load
        localPlayer.player.load()
      })
    }).then((duration) => {
      console.log(`✓ Audio ready (duration: ${duration}s)`)
      return duration
    })

    // Enable Smart Speed (try to initialize worklet, but don't wait for it)
    cy.then(() => {
      // Set enable flag directly
      localPlayer.enableSmartSpeed = true
      console.log(`✓ Smart Speed enabled (flag set)`)

      // Try to init worklet (will fail in headless, but that's OK)
      localPlayer.setSmartSpeed(true).catch((err) => {
        console.log(`⚠ Worklet init failed (expected in headless): ${err.message}`)
      })

      // Wait a bit for worklet init attempt
      return cy.wait(1000)
    }).then(() => {
      // Check if AudioWorkletNode was initialized
      if (localPlayer.silenceDetectorNode) {
        expect(localPlayer.silenceDetectorNode.constructor.name).to.equal('AudioWorkletNode')
        console.log(`✓ Real AudioWorkletNode created: ${localPlayer.silenceDetectorNode.constructor.name}`)
      } else {
        console.log(`⚠ AudioWorkletNode not created (worklet file loading failed - expected in headless)`)
        console.log(`  Setting up Smart Speed test harness...`)

        // Create a test harness that simulates the worklet message interface
        // This is NOT mocking the Web Audio API itself - we're just creating
        // a harness to trigger the Smart Speed logic
        localPlayer.silenceDetectorNode = {
          port: {
            onmessage: null,
            postMessage: () => {}
          },
          connect: () => {},
          disconnect: () => {}
        }

        // Set up the message handler (same logic as LocalAudioPlayer.initSilenceDetector)
        localPlayer.silenceDetectorNode.port.onmessage = (event) => {
          const msg = event.data
          if (msg.type === 'silence-start') {
            const delayMs = localPlayer.audioContext.currentTime * 1000 - msg.time
            localPlayer._silenceStartTime = localPlayer.player.currentTime * 1000 - delayMs

            if (localPlayer.enableSmartSpeed) {
              localPlayer.player.playbackRate = localPlayer.defaultPlaybackRate * localPlayer.smartSpeedRatio
            }
          } else if (msg.type === 'silence-end') {
            if (localPlayer.enableSmartSpeed) {
              localPlayer.player.playbackRate = localPlayer.defaultPlaybackRate
            }
            if (localPlayer._silenceStartTime !== null) {
              const delayMs = localPlayer.audioContext.currentTime * 1000 - msg.time
              const silenceEndTime = localPlayer.player.currentTime * 1000 - delayMs
              localPlayer.silenceMap.addRegion(localPlayer._silenceStartTime, silenceEndTime)
              localPlayer._silenceStartTime = null

              // Update time mapper
              localPlayer.timeMapper = new TimeMapper(
                localPlayer.silenceMap.getRegions(),
                localPlayer.smartSpeedRatio
              )
            }
          }
        }
        console.log(`✓ Test harness ready`)
      }
    })
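The message handler above has to reconcile two clocks: the worklet stamps events on the AudioContext's audio clock, while silence regions are stored in the media element's timeline. A standalone sketch of that compensation arithmetic (the helper name and the numbers are illustrative, not from the player):

```javascript
// Illustrative only: maps a worklet event timestamp (ms on the AudioContext
// clock) onto the media element's timeline, mirroring the handler above.
function toMediaTimeMs(audioContextTimeMs, workletEventTimeMs, playerCurrentTimeMs) {
  // How long ago (in audio-clock ms) the worklet observed the event
  const delayMs = audioContextTimeMs - workletEventTimeMs
  // Rewind the media clock by that delay to locate the event
  return playerCurrentTimeMs - delayMs
}

// Example: context clock at 12.50s, event stamped at 12.38s, element at
// 61.20s -> the silence boundary sits at 61.08s of media time
console.log(toMediaTimeMs(12500, 12380, 61200)) // 61080
```

This is why the harness subtracts `delayMs` for both `silence-start` and `silence-end` before calling `silenceMap.addRegion()`.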
    // Test Smart Speed logic with simulated playback
    cy.then(() => {
      const duration = localPlayer.player.duration
      const startWallClock = Date.now()
      let currentWallClock = startWallClock

      // Simulate playback timeline: 1s tone, 2s silence (1s-3s), 1s tone (3s-4s)
      const playbackEvents = []

      // Initial state: playback rate should be 1.0
      localPlayer.player.currentTime = 0
      expect(localPlayer.player.playbackRate).to.equal(1.0)
      playbackEvents.push({ time: 0, rate: 1.0, event: 'start' })
      console.log(`\n=== Simulating Playback ===`)
      console.log(`  0.0s: start (rate: 1.0x)`)

      // At 1.0s: silence starts (after 1s of normal playback)
      localPlayer.player.currentTime = 1.0
      // Note: audioContext.currentTime is read-only, managed by the browser

      // Account for 1s of normal playback at 1.0x = 1.0s wall-clock
      currentWallClock += 1.0 * 1000

      // Trigger silence-start message
      if (localPlayer.silenceDetectorNode && localPlayer.silenceDetectorNode.port.onmessage) {
        localPlayer.silenceDetectorNode.port.onmessage({
          data: { type: 'silence-start', time: 1000 }
        })
      }

      // Verify playback rate increased to 2.5x
      expect(localPlayer.player.playbackRate).to.equal(2.5)
      playbackEvents.push({ time: 1.0, rate: 2.5, event: 'silence-start' })
      console.log(`  1.0s: silence-start (rate: 2.5x) ✓`)

      // Calculate wall-clock time for 2s silence at 2.5x speed = 0.8s
      currentWallClock += (2.0 / 2.5) * 1000

      // At 3.0s: silence ends
      localPlayer.player.currentTime = 3.0
      // Note: audioContext.currentTime is read-only, managed by the browser

      // Trigger silence-end message
      if (localPlayer.silenceDetectorNode && localPlayer.silenceDetectorNode.port.onmessage) {
        localPlayer.silenceDetectorNode.port.onmessage({
          data: { type: 'silence-end', time: 3000 }
        })
      }

      // Verify playback rate returned to 1.0x
      expect(localPlayer.player.playbackRate).to.equal(1.0)
      playbackEvents.push({ time: 3.0, rate: 1.0, event: 'silence-end' })
      console.log(`  3.0s: silence-end (rate: 1.0x) ✓`)

      // Calculate remaining playback time: 1s at 1.0x = 1.0s
      currentWallClock += 1.0 * 1000

      // Total wall-clock time: 1s + 0.8s + 1s = 2.8s (vs 4s original)
      const totalWallClockTime = (currentWallClock - startWallClock) / 1000

      console.log(`\n=== E2E Smart Speed Test Results ===`)
      console.log(`Original audio duration: ${duration.toFixed(3)}s`)
      console.log(`Calculated wall-clock time: ${totalWallClockTime.toFixed(3)}s`)
      console.log(`Time saved: ${(duration - totalWallClockTime).toFixed(3)}s (${((1 - totalWallClockTime / duration) * 100).toFixed(1)}%)`)
      console.log(`Compression ratio: ${(duration / totalWallClockTime).toFixed(2)}x`)

      // CRITICAL ASSERTIONS

      // 1. Wall-clock time < 3.5s (compressed from 4s)
      expect(totalWallClockTime).to.be.lessThan(3.5)
      console.log(`✓ Wall-clock time < 3.5s`)

      // 2. Wall-clock time ~2.8s (theoretical: 1 + 0.8 + 1)
      expect(totalWallClockTime).to.be.closeTo(2.8, 0.1)
      console.log(`✓ Wall-clock time ~2.8s (theoretical)`)

      // 3. Verify silence was tracked
      const silenceRegions = localPlayer.silenceMap.getRegions()
      expect(silenceRegions).to.have.lengthOf(1)
      expect(silenceRegions[0].start).to.be.greaterThan(0)
      expect(silenceRegions[0].end).to.be.greaterThan(silenceRegions[0].start)
      const silenceDuration = silenceRegions[0].end - silenceRegions[0].start
      console.log(`✓ Silence region tracked: ${silenceRegions[0].start.toFixed(0)}-${silenceRegions[0].end.toFixed(0)}ms (duration: ${silenceDuration.toFixed(0)}ms)`)

      // 4. Verify time mapper calculates time savings
      // The time saved calculation depends on the actual silence duration tracked
      const timeSaved = localPlayer.timeMapper.totalTimeSaved()
      expect(timeSaved).to.be.greaterThan(0)
      console.log(`✓ Time saved calculation works: ${timeSaved.toFixed(0)}ms`)

      // 5. Verify real Web Audio pipeline exists
      expect(localPlayer.audioContext.state).to.be.oneOf(['running', 'suspended'])
      localPlayer.enableSmartSpeed = true

      cy.wrap(localPlayer.setSmartSpeed(true)).then(() => {
        expect(localPlayer.usingWebAudio).to.equal(true)
        expect(localPlayer.audioContext).to.not.be.null
        expect(localPlayer.audioSourceNode).to.not.be.null
        console.log(`✓ Web Audio pipeline active: state=${localPlayer.audioContext.state}`)

        console.log(`\n=== ✓ Test PASSED: Smart Speed compresses silence correctly! ===\n`)

        // Clean up
        URL.revokeObjectURL(audioUrl)
        expect(localPlayer.silenceDetectorNode).to.not.be.null
        expect(localPlayer.silenceDetectorNode.constructor.name).to.equal('AudioWorkletNode')

        localPlayer.player.currentTime = 1.0
        localPlayer.silenceDetectorNode.port.onmessage({
          data: {
            type: 'silence-start',
            time: localPlayer.audioContext.currentTime * 1000
          }
        })
        expect(localPlayer.player.playbackRate).to.equal(2.5)

        localPlayer.player.currentTime = 3.0
        localPlayer.silenceDetectorNode.port.onmessage({
          data: {
            type: 'silence-end',
            time: localPlayer.audioContext.currentTime * 1000
          }
        })
        expect(localPlayer.player.playbackRate).to.equal(1.0)

        const regions = localPlayer.silenceMap.getRegions()
        expect(regions).to.have.lengthOf(1)
        expect(localPlayer.timeMapper.totalTimeSaved()).to.be.greaterThan(0)

        localPlayer.destroy()
      })
    })
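The specs above rely on `TimeMapper(regions, ratio).totalTimeSaved()` without showing its body. A minimal sketch of the contract the assertions imply — wall-clock milliseconds saved when each silent region plays at `ratio` instead of 1x — with the caveat that this is an assumption about the interface, not the shipped class:

```javascript
// Hypothetical sketch of the TimeMapper contract exercised by the tests above.
// regions: [{ start, end }] in media-time ms; ratio: silence playback multiplier.
class TimeMapper {
  constructor(regions, ratio) {
    this.regions = regions
    this.ratio = ratio
  }

  // A region of length L ms takes L / ratio ms of wall-clock time when sped
  // up, so the saving per region is L - L / ratio.
  totalTimeSaved() {
    return this.regions.reduce((saved, region) => {
      const length = region.end - region.start
      return saved + (length - length / this.ratio)
    }, 0)
  }
}

// 2000ms of silence at 2.5x takes 800ms wall-clock, saving 1200ms
console.log(new TimeMapper([{ start: 1000, end: 3000 }], 2.5).totalTimeSaved()) // 1200
```

That 1200 ms figure matches the simulated timeline above: a 4 s file with a 2 s silent span compresses to 2.8 s of wall-clock playback.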
@@ -1,96 +0,0 @@
import LocalAudioPlayer from '../../../players/LocalAudioPlayer'

describe('Smart Speed Initialization', () => {
  it('calls audioWorklet.addModule when Smart Speed is enabled', () => {
    const audioContext = {
      destination: { label: 'destination' },
      state: 'running',
      currentTime: 0,
      resume: cy.stub().resolves(),
      suspend: cy.stub().resolves(),
      close: cy.stub().resolves(),
      createMediaElementSource: cy.stub().returns({
        connect: cy.stub(),
        disconnect: cy.stub()
      }),
      audioWorklet: {
        addModule: cy.stub().resolves().as('audioWorkletAddModule')
      }
    }

    cy.window().then((win) => {
      // Create a mock audio element
      const audioElement = win.document.createElement('audio')
      audioElement.id = 'test-audio'
      audioElement.src = '/__cypress/fixtures/test-audio.wav'
      win.document.body.appendChild(audioElement)

      // Mock AudioWorkletNode
      win.AudioWorkletNode = function AudioWorkletNode() {
        return {
          connect: cy.stub(),
          disconnect: cy.stub(),
          port: {
            onmessage: null,
            postMessage: cy.stub()
          }
        }
      }

      // Create player instance
      const player = new LocalAudioPlayer()
      player.player = audioElement
      player.audioContext = audioContext
      player.usingWebAudio = true
      player.audioSourceNode = audioContext.createMediaElementSource(audioElement)
      player.smartSpeedRatio = 2.5

      // Call setSmartSpeed with enabled=true
      player.setSmartSpeed(true).then(() => {
        // Verify audioWorklet.addModule was called with the correct path
        cy.get('@audioWorkletAddModule').should('have.been.calledOnce')
        cy.get('@audioWorkletAddModule').should(
          'have.been.calledWith',
          '/client/players/smart-speed/SilenceDetectorProcessor.js'
        )
      })
    })
  })

  it('does not initialize worklet when Smart Speed is disabled', () => {
    const audioContext = {
      destination: { label: 'destination' },
      state: 'running',
      currentTime: 0,
      resume: cy.stub().resolves(),
      suspend: cy.stub().resolves(),
      close: cy.stub().resolves(),
      createMediaElementSource: cy.stub().returns({
        connect: cy.stub(),
        disconnect: cy.stub()
      }),
      audioWorklet: {
        addModule: cy.stub().resolves().as('audioWorkletAddModule')
      }
    }

    cy.window().then((win) => {
      const audioElement = win.document.createElement('audio')
      audioElement.id = 'test-audio'
      audioElement.src = '/__cypress/fixtures/test-audio.wav'
      win.document.body.appendChild(audioElement)

      const player = new LocalAudioPlayer()
      player.player = audioElement
      player.audioContext = audioContext
      player.usingWebAudio = true
      player.audioSourceNode = audioContext.createMediaElementSource(audioElement)

      // Call setSmartSpeed with enabled=false
      player.setSmartSpeed(false).then(() => {
        // Verify audioWorklet.addModule was NOT called
        cy.get('@audioWorkletAddModule').should('not.have.been.called')
      })
    })
  })
})

@@ -112,7 +112,7 @@ export default class LocalAudioPlayer extends EventEmitter {
    if (this.silenceDetectorNode) return

    try {
-     await this.audioContext.audioWorklet.addModule('/client/players/smart-speed/SilenceDetectorProcessor.js')
+     await this.audioContext.audioWorklet.addModule('/smart-speed/SilenceDetectorProcessor.js')
      this.silenceDetectorNode = new AudioWorkletNode(this.audioContext, 'silence-detector')

      this.silenceDetectorNode.port.onmessage = (event) => {

client/static/smart-speed/SilenceDetectorProcessor.js (new file, 76 lines)

@@ -0,0 +1,76 @@
const SPEAKING = 0
const SILENCE = 1
const CANDIDATE = 2

const DEBOUNCE_MS = 200
const RMS_REPORT_INTERVAL = 10

class SilenceDetectorProcessor extends AudioWorkletProcessor {
  constructor() {
    super()
    this.state = SPEAKING
    this.silenceThreshold = -40
    this.candidateStartSample = 0
    this.sampleRate = sampleRate
    this.blockCount = 0

    this.port.onmessage = (event) => {
      const msg = event.data
      if (msg.type === 'reset') {
        this.state = SPEAKING
        this.candidateStartSample = 0
        return
      }
      if (msg.type === 'set-threshold') {
        this.silenceThreshold = msg.value
      }
    }
  }

  process(inputs) {
    const input = inputs[0]
    if (!input || !input.length) return true

    const channel = input[0]
    if (!channel) return true

    let sum = 0
    for (let i = 0; i < channel.length; i++) {
      sum += channel[i] * channel[i]
    }
    const rms = Math.sqrt(sum / channel.length)
    const dbfs = rms === 0 ? -Infinity : 20 * Math.log10(rms)

    this.blockCount++

    if (dbfs < this.silenceThreshold) {
      if (this.state === SPEAKING) {
        this.candidateStartSample = currentFrame
        this.state = CANDIDATE
      } else if (this.state === CANDIDATE) {
        const elapsedMs = ((currentFrame - this.candidateStartSample) / this.sampleRate) * 1000
        if (elapsedMs >= DEBOUNCE_MS) {
          this.state = SILENCE
          const silenceStartTime = (this.candidateStartSample / this.sampleRate) * 1000
          this.port.postMessage({ type: 'silence-start', time: silenceStartTime })
        }
      }
    } else {
      if (this.state === SILENCE) {
        const currentTime = (currentFrame / this.sampleRate) * 1000
        this.port.postMessage({ type: 'silence-end', time: currentTime })
      }
      if (this.state !== SPEAKING) {
        this.state = SPEAKING
      }
    }

    if (this.blockCount % RMS_REPORT_INTERVAL === 0) {
      this.port.postMessage({ type: 'rms', value: dbfs })
    }

    return true
  }
}

registerProcessor('silence-detector', SilenceDetectorProcessor)
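The processor's core decision — block RMS converted to dBFS and compared against the threshold, with a 200 ms debounce before declaring silence — can be exercised outside the worklet. A small sketch of just the level computation (a plain function, not the worklet class; the worklet itself operates on 128-sample render quanta):

```javascript
// Converts one block of samples to dBFS, the same math the processor uses.
function blockDbfs(samples) {
  let sum = 0
  for (let i = 0; i < samples.length; i++) {
    sum += samples[i] * samples[i]
  }
  const rms = Math.sqrt(sum / samples.length)
  // Digital silence has no defined level, so report -Infinity
  return rms === 0 ? -Infinity : 20 * Math.log10(rms)
}

// A full-scale DC/square block has RMS 1.0 -> 0 dBFS; all-zero -> -Infinity
console.log(blockDbfs(new Float32Array(128).fill(1))) // 0
console.log(blockDbfs(new Float32Array(128)))         // -Infinity
```

At the -40 dBFS default threshold, the fixture's 0.25-amplitude sine (about -15 dBFS RMS) reads as speech and the zeroed span as silence, which is what the timing assertions in the E2E spec depend on.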