Mirror of https://github.com/yt-dlp/yt-dlp.git (synced 2024-11-23 11:31:29 +01:00)

Compare commits: 7540581052...f976e61de5 (28 commits)
Commits (newest first):

f976e61de5, f919729538, 7ea2787920, f7257588bd, da252d9d32, e079ffbda6, 2009cb27e1,
f351440f1d, f9d98509a8, 37cd7660ea, d867f99622, 10fc719bc7, eb15fd5a32, 7cecd299e4,
52c0ffe40a, 637d62a3a9, f95a92b3d0, 1d253b0a27, 720b3dc453, d215fba7ed, 8388ec256f,
6365e92589, 70c55cb08f, c699bafc50, eb64ae7d5d, c014fbcddc, 39d79c9b9c, 10eeaa6bfd
CONTRIBUTORS (12 lines changed)

```diff
@@ -695,3 +695,15 @@ KBelmin
 kesor
 MellowKyler
 Wesley107772
+a13ssandr0
+ChocoLZS
+doe1080
+hugovdev
+jshumphrey
+julionc
+manavchaudhary1
+powergold1
+Sakura286
+SamDecrock
+stratus-ss
+subrat-lima
```
Changelog.md (58 lines changed)

@@ -4,6 +4,64 @@ # Changelog
# To create a release, dispatch the https://github.com/yt-dlp/yt-dlp/actions/workflows/release.yml workflow on master
-->

### 2024.11.18

#### Important changes
- **Login with OAuth is no longer supported for YouTube**
Due to a change made by the site, yt-dlp is no longer able to support OAuth login for YouTube. [Read more](https://github.com/yt-dlp/yt-dlp/issues/11462#issuecomment-2471703090)

#### Core changes
- [Catch broken Cryptodome installations](https://github.com/yt-dlp/yt-dlp/commit/b83ca24eb72e1e558b0185bd73975586c0bc0546) ([#11486](https://github.com/yt-dlp/yt-dlp/issues/11486)) by [seproDev](https://github.com/seproDev)
- **utils**
    - [Fix `join_nonempty`, add `**kwargs` to `unpack`](https://github.com/yt-dlp/yt-dlp/commit/39d79c9b9cf23411d935910685c40aa1a2fdb409) ([#11559](https://github.com/yt-dlp/yt-dlp/issues/11559)) by [Grub4K](https://github.com/Grub4K)
    - `subs_list_to_dict`: [Add `lang` default parameter](https://github.com/yt-dlp/yt-dlp/commit/c014fbcddcb4c8f79d914ac5bb526758b540ea33) ([#11508](https://github.com/yt-dlp/yt-dlp/issues/11508)) by [Grub4K](https://github.com/Grub4K)
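In miniature, the two `utils` changes above look like this — a sketch mirroring the updated tests further down in this compare; the import paths follow the test suite's usage and are otherwise an assumption:

```python
from yt_dlp.utils import join_nonempty
from yt_dlp.utils.traversal import subs_list_to_dict, traverse_obj, unpack

# `unpack` now forwards keyword arguments to the wrapped callable
assert unpack(join_nonempty, delim=' ')([1, 2, 3]) == '1 2 3'

# `subs_list_to_dict` now takes a default `lang`, used for entries without an id
subs = traverse_obj([
    {'name': 'de', 'url': 'https://example.com/subs/de.ass'},
    {'url': 'https://example.com/subs/en'},  # no language id -> falls back to lang
], [..., {'id': 'name', 'url': 'url'}, all, {subs_list_to_dict(lang='en')}])
assert set(subs) == {'de', 'en'}
```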
#### Extractor changes
- [Allow `ext` override for thumbnails](https://github.com/yt-dlp/yt-dlp/commit/eb64ae7d5def6df2aba74fb703e7f168fb299865) ([#11545](https://github.com/yt-dlp/yt-dlp/issues/11545)) by [bashonly](https://github.com/bashonly)
- **adobepass**: [Fix provider requests](https://github.com/yt-dlp/yt-dlp/commit/85fdc66b6e01d19a94b4f39b58e3c0cf23600902) ([#11472](https://github.com/yt-dlp/yt-dlp/issues/11472)) by [bashonly](https://github.com/bashonly)
- **archive.org**: [Fix comments extraction](https://github.com/yt-dlp/yt-dlp/commit/f2a4983df7a64c4e93b56f79dbd16a781bd90206) ([#11527](https://github.com/yt-dlp/yt-dlp/issues/11527)) by [jshumphrey](https://github.com/jshumphrey)
- **bandlab**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/6365e92589e4bc17b8fffb0125a716d144ad2137) ([#11535](https://github.com/yt-dlp/yt-dlp/issues/11535)) by [seproDev](https://github.com/seproDev)
- **chaturbate**
    - [Extract from API and support impersonation](https://github.com/yt-dlp/yt-dlp/commit/720b3dc453c342bc2e8df7dbc0acaab4479de46c) ([#11555](https://github.com/yt-dlp/yt-dlp/issues/11555)) by [powergold1](https://github.com/powergold1) (With fixes in [7cecd29](https://github.com/yt-dlp/yt-dlp/commit/7cecd299e4a5ef1f0f044b2fedc26f17e41f15e3) by [seproDev](https://github.com/seproDev))
    - [Support alternate domains](https://github.com/yt-dlp/yt-dlp/commit/a9f85670d03ab993dc589f21a9ffffcad61392d5) ([#10595](https://github.com/yt-dlp/yt-dlp/issues/10595)) by [manavchaudhary1](https://github.com/manavchaudhary1)
- **cloudflarestream**: [Avoid extraction via videodelivery.net](https://github.com/yt-dlp/yt-dlp/commit/2db8c2e7d57a1784b06057c48e3e91023720d195) ([#11478](https://github.com/yt-dlp/yt-dlp/issues/11478)) by [hugovdev](https://github.com/hugovdev)
- **ctvnews**
    - [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/f351440f1dc5b3dfbfc5737b037a869d946056fe) ([#11534](https://github.com/yt-dlp/yt-dlp/issues/11534)) by [bashonly](https://github.com/bashonly), [jshumphrey](https://github.com/jshumphrey)
    - [Fix playlist ID extraction](https://github.com/yt-dlp/yt-dlp/commit/f9d98509a898737c12977b2e2117277bada2c196) ([#8892](https://github.com/yt-dlp/yt-dlp/issues/8892)) by [qbnu](https://github.com/qbnu)
- **digitalconcerthall**: [Support login with access/refresh tokens](https://github.com/yt-dlp/yt-dlp/commit/f7257588bdff5f0b0452635a66b253a783c97357) ([#11571](https://github.com/yt-dlp/yt-dlp/issues/11571)) by [bashonly](https://github.com/bashonly)
- **facebook**: [Fix formats extraction](https://github.com/yt-dlp/yt-dlp/commit/bacc31b05a04181b63100c481565256b14813a5e) ([#11513](https://github.com/yt-dlp/yt-dlp/issues/11513)) by [bashonly](https://github.com/bashonly)
- **gamedevtv**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8) ([#11368](https://github.com/yt-dlp/yt-dlp/issues/11368)) by [bashonly](https://github.com/bashonly), [stratus-ss](https://github.com/stratus-ss)
- **goplay**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/6b43a8d84b881d769b480ba6e20ec691e9d1b92d) ([#11466](https://github.com/yt-dlp/yt-dlp/issues/11466)) by [bashonly](https://github.com/bashonly), [SamDecrock](https://github.com/SamDecrock)
- **kenh14**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/eb15fd5a32d8b35ef515f7a3d1158c03025648ff) ([#3996](https://github.com/yt-dlp/yt-dlp/issues/3996)) by [krichbanana](https://github.com/krichbanana), [pzhlkj6612](https://github.com/pzhlkj6612)
- **litv**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/e079ffbda66de150c0a9ebef05e89f61bb4d5f76) ([#11071](https://github.com/yt-dlp/yt-dlp/issues/11071)) by [jiru](https://github.com/jiru)
- **mixchmovie**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/0ec9bfed4d4a52bfb4f8733da1acf0aeeae21e6b) ([#10897](https://github.com/yt-dlp/yt-dlp/issues/10897)) by [Sakura286](https://github.com/Sakura286)
- **patreon**: [Fix comments extraction](https://github.com/yt-dlp/yt-dlp/commit/1d253b0a27110d174c40faf8fb1c999d099e0cde) ([#11530](https://github.com/yt-dlp/yt-dlp/issues/11530)) by [bashonly](https://github.com/bashonly), [jshumphrey](https://github.com/jshumphrey)
- **pialive**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/d867f99622ef7fba690b08da56c39d739b822bb7) ([#10811](https://github.com/yt-dlp/yt-dlp/issues/10811)) by [ChocoLZS](https://github.com/ChocoLZS)
- **radioradicale**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/70c55cb08f780eab687e881ef42bb5c6007d290b) ([#5607](https://github.com/yt-dlp/yt-dlp/issues/5607)) by [a13ssandr0](https://github.com/a13ssandr0), [pzhlkj6612](https://github.com/pzhlkj6612)
- **reddit**: [Improve error handling](https://github.com/yt-dlp/yt-dlp/commit/7ea2787920cccc6b8ea30791993d114fbd564434) ([#11573](https://github.com/yt-dlp/yt-dlp/issues/11573)) by [bashonly](https://github.com/bashonly)
- **redgifsuser**: [Fix extraction](https://github.com/yt-dlp/yt-dlp/commit/d215fba7edb69d4fa665f43663756fd260b1489f) ([#11531](https://github.com/yt-dlp/yt-dlp/issues/11531)) by [jshumphrey](https://github.com/jshumphrey)
- **rutube**: [Rework extractors](https://github.com/yt-dlp/yt-dlp/commit/e398217aae19bb25f91797bfbe8a3243698d7f45) ([#11480](https://github.com/yt-dlp/yt-dlp/issues/11480)) by [seproDev](https://github.com/seproDev)
- **sonylivseries**: [Add `sort_order` extractor-arg](https://github.com/yt-dlp/yt-dlp/commit/2009cb27e17014787bf63eaa2ada51293d54f22a) ([#11569](https://github.com/yt-dlp/yt-dlp/issues/11569)) by [bashonly](https://github.com/bashonly)
- **soop**: [Fix thumbnail extraction](https://github.com/yt-dlp/yt-dlp/commit/c699bafc5038b59c9afe8c2e69175fb66424c832) ([#11545](https://github.com/yt-dlp/yt-dlp/issues/11545)) by [bashonly](https://github.com/bashonly)
- **spankbang**: [Support browser impersonation](https://github.com/yt-dlp/yt-dlp/commit/8388ec256f7753b02488788e3cfa771f6e1db247) ([#11542](https://github.com/yt-dlp/yt-dlp/issues/11542)) by [jshumphrey](https://github.com/jshumphrey)
- **spreaker**
    - [Support episode pages and access keys](https://github.com/yt-dlp/yt-dlp/commit/c39016f66df76d14284c705736ca73db8055d8de) ([#11489](https://github.com/yt-dlp/yt-dlp/issues/11489)) by [julionc](https://github.com/julionc)
    - [Support podcast and feed pages](https://github.com/yt-dlp/yt-dlp/commit/c6737310619022248f5d0fd13872073cac168453) ([#10968](https://github.com/yt-dlp/yt-dlp/issues/10968)) by [subrat-lima](https://github.com/subrat-lima)
- **youtube**
    - [Player client maintenance](https://github.com/yt-dlp/yt-dlp/commit/637d62a3a9fc723d68632c1af25c30acdadeeb85) ([#11528](https://github.com/yt-dlp/yt-dlp/issues/11528)) by [bashonly](https://github.com/bashonly), [seproDev](https://github.com/seproDev)
    - [Remove broken OAuth support](https://github.com/yt-dlp/yt-dlp/commit/52c0ffe40ad6e8404d93296f575007b05b04c686) ([#11558](https://github.com/yt-dlp/yt-dlp/issues/11558)) by [bashonly](https://github.com/bashonly)
    - tab: [Fix podcasts tab extraction](https://github.com/yt-dlp/yt-dlp/commit/37cd7660eaff397c551ee18d80507702342b0c2b) ([#11567](https://github.com/yt-dlp/yt-dlp/issues/11567)) by [seproDev](https://github.com/seproDev)

#### Misc. changes
- **build**
    - [Bump PyInstaller version pin to `>=6.11.1`](https://github.com/yt-dlp/yt-dlp/commit/f9c8deb4e5887ff5150e911ac0452e645f988044) ([#11507](https://github.com/yt-dlp/yt-dlp/issues/11507)) by [bashonly](https://github.com/bashonly)
    - [Enable attestations for trusted publishing](https://github.com/yt-dlp/yt-dlp/commit/f13df591d4d7ca8e2f31b35c9c91e69ba9e9b013) ([#11420](https://github.com/yt-dlp/yt-dlp/issues/11420)) by [bashonly](https://github.com/bashonly)
    - [Pin `websockets` version to >=13.0,<14](https://github.com/yt-dlp/yt-dlp/commit/240a7d43c8a67ffb86d44dc276805aa43c358dcc) ([#11488](https://github.com/yt-dlp/yt-dlp/issues/11488)) by [bashonly](https://github.com/bashonly)
- **cleanup**
    - [Deprecate more compat functions](https://github.com/yt-dlp/yt-dlp/commit/f95a92b3d0169a784ee15a138fbe09d82b2754a1) ([#11439](https://github.com/yt-dlp/yt-dlp/issues/11439)) by [seproDev](https://github.com/seproDev)
    - [Remove dead extractors](https://github.com/yt-dlp/yt-dlp/commit/10fc719bc7f1eef469389c5219102266ef411f29) ([#11566](https://github.com/yt-dlp/yt-dlp/issues/11566)) by [doe1080](https://github.com/doe1080)
    - Miscellaneous: [da252d9](https://github.com/yt-dlp/yt-dlp/commit/da252d9d322af3e2178ac5eae324809502a0a862) by [bashonly](https://github.com/bashonly), [Grub4K](https://github.com/Grub4K), [seproDev](https://github.com/seproDev)

### 2024.11.04

#### Important changes
README.md (11 lines changed)

```diff
@@ -342,8 +342,9 @@ ## General Options:
                                     extractor plugins; postprocessor plugins can
                                     only be loaded from the default plugin
                                     directories
-    --flat-playlist                 Do not extract the videos of a playlist,
-                                    only list them
+    --flat-playlist                 Do not extract a playlist's URL result
+                                    entries; some entry metadata may be missing
+                                    and downloading may be bypassed
     --no-flat-playlist              Fully extract the videos of a playlist
                                     (default)
     --live-from-start               Download livestreams from the start.

@@ -1768,7 +1769,7 @@ # EXTRACTOR ARGUMENTS
 #### youtube
 * `lang`: Prefer translated metadata (`title`, `description` etc) of this language code (case-sensitive). By default, the video primary language metadata is preferred, with a fallback to `en` translated. See [youtube.py](https://github.com/yt-dlp/yt-dlp/blob/c26f9b991a0681fd3ea548d535919cec1fbbd430/yt_dlp/extractor/youtube.py#L381-L390) for list of supported content language codes
 * `skip`: One or more of `hls`, `dash` or `translated_subs` to skip extraction of the m3u8 manifests, dash manifests and [auto-translated subtitles](https://github.com/yt-dlp/yt-dlp/issues/4090#issuecomment-1158102032) respectively
-* `player_client`: Clients to extract video data from. The main clients are `web`, `ios` and `android`, with variants `_music` and `_creator` (e.g. `ios_creator`); and `mweb`, `mediaconnect`, `android_testsuite`, `android_vr`, `web_safari`, `web_embedded`, `tv` and `tv_embedded` with no variants. By default, `ios,mweb` is used, and `web_creator,mediaconnect` is added as needed for age-gated videos when account age verification is required. Similarly, the `_music` variants are added for `music.youtube.com` URLs. Some clients, such as `web` and `android`, require a `po_token` for their formats to be downloadable. Some clients, such as the `_creator` variants, will only work with authentication. You can use `all` to use all the clients, and `default` for the default clients. You can prefix a client with `-` to exclude it, e.g. `youtube:player_client=all,-web`
+* `player_client`: Clients to extract video data from. The main clients are `web`, `ios` and `android`, with variants `_music` and `_creator` (e.g. `ios_creator`); and `mweb`, `mediaconnect`, `android_vr`, `web_safari`, `web_embedded`, `tv` and `tv_embedded` with no variants. By default, `ios,mweb` is used, and `web_creator` is added as needed for age-gated videos when account age verification is required. Similarly, the `_music` variants are added for `music.youtube.com` URLs. Some clients, such as `web` and `android`, require a `po_token` for their formats to be downloadable. Some clients, such as the `_creator` variants, will only work with authentication. You can use `all` to use all the clients, and `default` for the default clients. You can prefix a client with `-` to exclude it, e.g. `youtube:player_client=all,-web`
 * `player_skip`: Skip some network requests that are generally needed for robust extraction. One or more of `configs` (skip client configs), `webpage` (skip initial webpage), `js` (skip js player). While these options can help reduce the number of requests needed or avoid some rate-limiting, they could cause some issues. See [#860](https://github.com/yt-dlp/yt-dlp/pull/860) for more details
 * `player_params`: YouTube player parameters to use for player requests. Will overwrite any default ones set by yt-dlp.
 * `comment_sort`: `top` or `new` (default) - choose comment sorting mode (on YouTube's side)

@@ -1866,8 +1867,8 @@ #### orfon (orf:on)
 #### bilibili
 * `prefer_multi_flv`: Prefer extracting flv formats over mp4 for older videos that still provide legacy formats
 
 #### digitalconcerthall
 * `prefer_combined_hls`: Prefer extracting combined/pre-merged video and audio HLS formats. This will exclude 4K/HEVC video and lossless/FLAC audio formats, which are only available as split video/audio HLS formats
 
+#### sonylivseries
+* `sort_order`: Episode sort order for series extraction - one of `asc` (ascending, oldest first) or `desc` (descending, newest first). Default is `asc`
 
 **Note**: These options may be changed/removed in the future without concern for backward compatibility
```
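For embedders, the CLI form `--extractor-args "sonylivseries:sort_order=desc"` maps onto the `extractor_args` parameter of the Python API. A minimal sketch (the URL is a made-up placeholder):

```python
import yt_dlp

opts = {
    # keys are extractor names; values are dicts of arg -> list of string values
    'extractor_args': {'sonylivseries': {'sort_order': ['desc']}},
}
with yt_dlp.YoutubeDL(opts) as ydl:
    # hypothetical series URL, shown only to illustrate the call shape
    info = ydl.extract_info('https://www.sonyliv.com/shows/example-12345', download=False)
```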
devscripts/changelog_override.json (file name inferred from content)

```diff
@@ -234,5 +234,10 @@
         "when": "57212a5f97ce367590aaa5c3e9a135eead8f81f7",
         "short": "[ie/vimeo] Fix API retries (#11351)",
         "authors": ["bashonly"]
-    }
+    },
+    {
+        "action": "add",
+        "when": "52c0ffe40ad6e8404d93296f575007b05b04c686",
+        "short": "[priority] **Login with OAuth is no longer supported for YouTube**\nDue to a change made by the site, yt-dlp is no longer able to support OAuth login for YouTube. [Read more](https://github.com/yt-dlp/yt-dlp/issues/11462#issuecomment-2471703090)"
+    }
 ]
```
devscripts/generate_aes_testdata.py

```diff
@@ -11,13 +11,12 @@
 import subprocess
 
 from yt_dlp.aes import aes_encrypt, key_expansion
-from yt_dlp.utils import intlist_to_bytes
 
 secret_msg = b'Secret message goes here'
 
 
 def hex_str(int_list):
-    return codecs.encode(intlist_to_bytes(int_list), 'hex')
+    return codecs.encode(bytes(int_list), 'hex')
 
 
 def openssl_encode(algo, key, iv):
```
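The swap works because `bytes()` accepts any iterable of ints in 0-255, which is exactly what `intlist_to_bytes` consumed. A quick illustrative check (values made up):

```python
import codecs

int_list = [222, 173, 190, 239]
assert bytes(int_list) == b'\xde\xad\xbe\xef'            # bytes() as drop-in replacement
assert codecs.encode(bytes(int_list), 'hex') == b'deadbeef'  # same hex_str() output as before
```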
pyproject.toml

```diff
@@ -313,6 +313,16 @@ banned-from = [
 "yt_dlp.compat.compat_urllib_parse_urlparse".msg = "Use `urllib.parse.urlparse` instead."
 "yt_dlp.compat.compat_shlex_quote".msg = "Use `yt_dlp.utils.shell_quote` instead."
 "yt_dlp.utils.error_to_compat_str".msg = "Use `str` instead."
+"yt_dlp.utils.bytes_to_intlist".msg = "Use `list` instead."
+"yt_dlp.utils.intlist_to_bytes".msg = "Use `bytes` instead."
+"yt_dlp.utils.decodeArgument".msg = "Do not use"
+"yt_dlp.utils.decodeFilename".msg = "Do not use"
+"yt_dlp.utils.encodeFilename".msg = "Do not use"
+"yt_dlp.compat.compat_os_name".msg = "Use `os.name` instead."
+"yt_dlp.compat.compat_realpath".msg = "Use `os.path.realpath` instead."
+"yt_dlp.compat.functools".msg = "Use `functools` instead."
+"yt_dlp.utils.decodeOption".msg = "Do not use"
+"yt_dlp.utils.compiled_regex_type".msg = "Use `re.Pattern` instead."
 
 [tool.autopep8]
 max_line_length = 120
```
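As a rough illustration (not taken from the diff), the `msg` strings above translate into these drop-in replacements:

```python
from yt_dlp.utils import shell_quote

assert list(b'\x00\x01\xff') == [0, 1, 255]       # was: bytes_to_intlist(b'\x00\x01\xff')
assert bytes([0, 1, 255]) == b'\x00\x01\xff'      # was: intlist_to_bytes([0, 1, 255])
assert str(ValueError('boom')) == 'boom'          # was: error_to_compat_str(...)
print(shell_quote(['ffmpeg', '-i', "it's.mp4"]))  # was: compat_shlex_quote
```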
supportedsites.md

```diff
@@ -129,6 +129,8 @@ # Supported sites
 - **Bandcamp:album**
 - **Bandcamp:user**
 - **Bandcamp:weekly**
+- **Bandlab**
+- **BandlabPlaylist**
 - **BannedVideo**
 - **bbc**: [*bbc*](## "netrc machine") BBC
 - **bbc.co.uk**: [*bbc*](## "netrc machine") BBC iPlayer

@@ -484,6 +486,7 @@ # Supported sites
 - **Gab**
 - **GabTV**
 - **Gaia**: [*gaia*](## "netrc machine")
+- **GameDevTVDashboard**: [*gamedevtv*](## "netrc machine")
 - **GameJolt**
 - **GameJoltCommunity**
 - **GameJoltGame**

@@ -651,6 +654,8 @@ # Supported sites
 - **Karaoketv**
 - **Katsomo**: (**Currently broken**)
 - **KelbyOne**: (**Currently broken**)
+- **Kenh14Playlist**
+- **Kenh14Video**
 - **Ketnet**
 - **khanacademy**
 - **khanacademy:unit**

@@ -784,10 +789,6 @@ # Supported sites
 - **MicrosoftLearnSession**
 - **MicrosoftMedius**
 - **microsoftstream**: Microsoft Stream
-- **mildom**: Record ongoing live by specific user in Mildom
-- **mildom:clip**: Clip in Mildom
-- **mildom:user:vod**: Download all VODs from specific user in Mildom
-- **mildom:vod**: VOD in Mildom
 - **minds**
 - **minds:channel**
 - **minds:group**

@@ -798,6 +799,7 @@ # Supported sites
 - **MiTele**: mitele.es
 - **mixch**
 - **mixch:archive**
+- **mixch:movie**
 - **mixcloud**
 - **mixcloud:playlist**
 - **mixcloud:user**

@@ -1060,8 +1062,8 @@ # Supported sites
 - **PhilharmonieDeParis**: Philharmonie de Paris
 - **phoenix.de**
 - **Photobucket**
+- **PiaLive**
 - **Piapro**: [*piapro*](## "netrc machine")
-- **PIAULIZAPortal**: ulizaportal.jp - PIA LIVE STREAM
 - **Picarto**
 - **PicartoVod**
 - **Piksel**

@@ -1088,8 +1090,6 @@ # Supported sites
 - **PodbayFMChannel**
 - **Podchaser**
 - **podomatic**: (**Currently broken**)
-- **Pokemon**
-- **PokemonWatch**
 - **PokerGo**: [*pokergo*](## "netrc machine")
 - **PokerGoCollection**: [*pokergo*](## "netrc machine")
 - **PolsatGo**

@@ -1160,6 +1160,7 @@ # Supported sites
 - **RadioJavan**: (**Currently broken**)
 - **radiokapital**
 - **radiokapital:show**
+- **RadioRadicale**
 - **RadioZetPodcast**
 - **radlive**
 - **radlive:channel**

@@ -1367,9 +1368,7 @@ # Supported sites
 - **spotify**: Spotify episodes (**Currently broken**)
 - **spotify:show**: Spotify shows (**Currently broken**)
 - **Spreaker**
-- **SpreakerPage**
 - **SpreakerShow**
-- **SpreakerShowPage**
 - **SpringboardPlatform**
 - **Sprout**
 - **SproutVideo**

@@ -1570,6 +1569,8 @@ # Supported sites
 - **UFCTV**: [*ufctv*](## "netrc machine")
 - **ukcolumn**: (**Currently broken**)
 - **UKTVPlay**
+- **UlizaPlayer**
+- **UlizaPortal**: ulizaportal.jp
 - **umg:de**: Universal Music Deutschland (**Currently broken**)
 - **Unistra**
 - **Unity**: (**Currently broken**)

@@ -1587,8 +1588,6 @@ # Supported sites
 - **Varzesh3**: (**Currently broken**)
 - **Vbox7**
 - **Veo**
-- **Veoh**
-- **veoh:user**
 - **Vesti**: Вести.Ru (**Currently broken**)
 - **Vevo**
 - **VevoPlaylist**
```
test/helper.py

```diff
@@ -9,7 +9,6 @@
 
 import yt_dlp.extractor
 from yt_dlp import YoutubeDL
-from yt_dlp.compat import compat_os_name
 from yt_dlp.utils import preferredencoding, try_call, write_string, find_available_port
 
 if 'pytest' in sys.modules:

@@ -49,7 +48,7 @@ def report_warning(message, *args, **kwargs):
     Print the message to stderr, it will be prefixed with 'WARNING:'
     If stderr is a tty file the 'WARNING:' will be colored
     """
-    if sys.stderr.isatty() and compat_os_name != 'nt':
+    if sys.stderr.isatty() and os.name != 'nt':
         _msg_header = '\033[0;33mWARNING:\033[0m'
     else:
         _msg_header = 'WARNING:'
```
test/test_YoutubeDL.py

```diff
@@ -15,7 +15,6 @@
 
 from test.helper import FakeYDL, assertRegexpMatches, try_rm
 from yt_dlp import YoutubeDL
-from yt_dlp.compat import compat_os_name
 from yt_dlp.extractor import YoutubeIE
 from yt_dlp.extractor.common import InfoExtractor
 from yt_dlp.postprocessor.common import PostProcessor

@@ -839,8 +838,8 @@ def expect_same_infodict(out):
         test('%(filesize)#D', '1Ki')
         test('%(height)5.2D', ' 1.08k')
         test('%(title4)#S', 'foo_bar_test')
-        test('%(title4).10S', ('foo "bar" ', 'foo "bar"' + ('#' if compat_os_name == 'nt' else ' ')))
-        if compat_os_name == 'nt':
+        test('%(title4).10S', ('foo "bar" ', 'foo "bar"' + ('#' if os.name == 'nt' else ' ')))
+        if os.name == 'nt':
             test('%(title4)q', ('"foo ""bar"" test"', None))
             test('%(formats.:.id)#q', ('"id 1" "id 2" "id 3"', None))
             test('%(formats.0.id)#q', ('"id 1"', None))

@@ -903,9 +902,9 @@ def gen():
 
         # Environment variable expansion for prepare_filename
         os.environ['__yt_dlp_var'] = 'expanded'
-        envvar = '%__yt_dlp_var%' if compat_os_name == 'nt' else '$__yt_dlp_var'
+        envvar = '%__yt_dlp_var%' if os.name == 'nt' else '$__yt_dlp_var'
         test(envvar, (envvar, 'expanded'))
-        if compat_os_name == 'nt':
+        if os.name == 'nt':
             test('%s%', ('%s%', '%s%'))
             os.environ['s'] = 'expanded'
             test('%s%', ('%s%', 'expanded'))  # %s% should be expanded before escaping %s
```
test/test_aes.py

```diff
@@ -27,7 +27,6 @@
     pad_block,
 )
 from yt_dlp.dependencies import Cryptodome
-from yt_dlp.utils import bytes_to_intlist, intlist_to_bytes
 
 # the encrypted data can be generate with 'devscripts/generate_aes_testdata.py'

@@ -40,33 +39,33 @@ def setUp(self):
     def test_encrypt(self):
         msg = b'message'
         key = list(range(16))
-        encrypted = aes_encrypt(bytes_to_intlist(msg), key)
-        decrypted = intlist_to_bytes(aes_decrypt(encrypted, key))
+        encrypted = aes_encrypt(list(msg), key)
+        decrypted = bytes(aes_decrypt(encrypted, key))
         self.assertEqual(decrypted, msg)
 
     def test_cbc_decrypt(self):
         data = b'\x97\x92+\xe5\x0b\xc3\x18\x91ky9m&\xb3\xb5@\xe6\x27\xc2\x96.\xc8u\x88\xab9-[\x9e|\xf1\xcd'
-        decrypted = intlist_to_bytes(aes_cbc_decrypt(bytes_to_intlist(data), self.key, self.iv))
+        decrypted = bytes(aes_cbc_decrypt(list(data), self.key, self.iv))
         self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
         if Cryptodome.AES:
-            decrypted = aes_cbc_decrypt_bytes(data, intlist_to_bytes(self.key), intlist_to_bytes(self.iv))
+            decrypted = aes_cbc_decrypt_bytes(data, bytes(self.key), bytes(self.iv))
             self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
 
     def test_cbc_encrypt(self):
-        data = bytes_to_intlist(self.secret_msg)
-        encrypted = intlist_to_bytes(aes_cbc_encrypt(data, self.key, self.iv))
+        data = list(self.secret_msg)
+        encrypted = bytes(aes_cbc_encrypt(data, self.key, self.iv))
         self.assertEqual(
             encrypted,
             b'\x97\x92+\xe5\x0b\xc3\x18\x91ky9m&\xb3\xb5@\xe6\'\xc2\x96.\xc8u\x88\xab9-[\x9e|\xf1\xcd')
 
     def test_ctr_decrypt(self):
-        data = bytes_to_intlist(b'\x03\xc7\xdd\xd4\x8e\xb3\xbc\x1a*O\xdc1\x12+8Aio\xd1z\xb5#\xaf\x08')
-        decrypted = intlist_to_bytes(aes_ctr_decrypt(data, self.key, self.iv))
+        data = list(b'\x03\xc7\xdd\xd4\x8e\xb3\xbc\x1a*O\xdc1\x12+8Aio\xd1z\xb5#\xaf\x08')
+        decrypted = bytes(aes_ctr_decrypt(data, self.key, self.iv))
         self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
 
     def test_ctr_encrypt(self):
-        data = bytes_to_intlist(self.secret_msg)
-        encrypted = intlist_to_bytes(aes_ctr_encrypt(data, self.key, self.iv))
+        data = list(self.secret_msg)
+        encrypted = bytes(aes_ctr_encrypt(data, self.key, self.iv))
         self.assertEqual(
             encrypted,
             b'\x03\xc7\xdd\xd4\x8e\xb3\xbc\x1a*O\xdc1\x12+8Aio\xd1z\xb5#\xaf\x08')

@@ -75,19 +74,19 @@ def test_gcm_decrypt(self):
         data = b'\x159Y\xcf5eud\x90\x9c\x85&]\x14\x1d\x0f.\x08\xb4T\xe4/\x17\xbd'
         authentication_tag = b'\xe8&I\x80rI\x07\x9d}YWuU@:e'
 
-        decrypted = intlist_to_bytes(aes_gcm_decrypt_and_verify(
-            bytes_to_intlist(data), self.key, bytes_to_intlist(authentication_tag), self.iv[:12]))
+        decrypted = bytes(aes_gcm_decrypt_and_verify(
+            list(data), self.key, list(authentication_tag), self.iv[:12]))
         self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
         if Cryptodome.AES:
             decrypted = aes_gcm_decrypt_and_verify_bytes(
-                data, intlist_to_bytes(self.key), authentication_tag, intlist_to_bytes(self.iv[:12]))
+                data, bytes(self.key), authentication_tag, bytes(self.iv[:12]))
             self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
 
     def test_gcm_aligned_decrypt(self):
         data = b'\x159Y\xcf5eud\x90\x9c\x85&]\x14\x1d\x0f'
         authentication_tag = b'\x08\xb1\x9d!&\x98\xd0\xeaRq\x90\xe6;\xb5]\xd8'
 
-        decrypted = intlist_to_bytes(aes_gcm_decrypt_and_verify(
-            bytes_to_intlist(data), self.key, bytes_to_intlist(authentication_tag), self.iv[:12]))
+        decrypted = bytes(aes_gcm_decrypt_and_verify(
+            list(data), self.key, list(authentication_tag), self.iv[:12]))

@@ -96,38 +95,38 @@ def test_gcm_aligned_decrypt(self):
             self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg[:16])
 
     def test_decrypt_text(self):
-        password = intlist_to_bytes(self.key).decode()
+        password = bytes(self.key).decode()
         encrypted = base64.b64encode(
-            intlist_to_bytes(self.iv[:8])
+            bytes(self.iv[:8])
             + b'\x17\x15\x93\xab\x8d\x80V\xcdV\xe0\t\xcdo\xc2\xa5\xd8ksM\r\xe27N\xae',
         ).decode()
         decrypted = (aes_decrypt_text(encrypted, password, 16))
         self.assertEqual(decrypted, self.secret_msg)
 
-        password = intlist_to_bytes(self.key).decode()
+        password = bytes(self.key).decode()
         encrypted = base64.b64encode(
-            intlist_to_bytes(self.iv[:8])
+            bytes(self.iv[:8])
             + b'\x0b\xe6\xa4\xd9z\x0e\xb8\xb9\xd0\xd4i_\x85\x1d\x99\x98_\xe5\x80\xe7.\xbf\xa5\x83',
         ).decode()
         decrypted = (aes_decrypt_text(encrypted, password, 32))
         self.assertEqual(decrypted, self.secret_msg)
 
     def test_ecb_encrypt(self):
-        data = bytes_to_intlist(self.secret_msg)
-        encrypted = intlist_to_bytes(aes_ecb_encrypt(data, self.key))
+        data = list(self.secret_msg)
+        encrypted = bytes(aes_ecb_encrypt(data, self.key))
         self.assertEqual(
             encrypted,
             b'\xaa\x86]\x81\x97>\x02\x92\x9d\x1bR[[L/u\xd3&\xd1(h\xde{\x81\x94\xba\x02\xae\xbd\xa6\xd0:')
 
     def test_ecb_decrypt(self):
-        data = bytes_to_intlist(b'\xaa\x86]\x81\x97>\x02\x92\x9d\x1bR[[L/u\xd3&\xd1(h\xde{\x81\x94\xba\x02\xae\xbd\xa6\xd0:')
-        decrypted = intlist_to_bytes(aes_ecb_decrypt(data, self.key, self.iv))
+        data = list(b'\xaa\x86]\x81\x97>\x02\x92\x9d\x1bR[[L/u\xd3&\xd1(h\xde{\x81\x94\xba\x02\xae\xbd\xa6\xd0:')
+        decrypted = bytes(aes_ecb_decrypt(data, self.key, self.iv))
         self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
 
     def test_key_expansion(self):
         key = '4f6bdaa39e2f8cb07f5e722d9edef314'
 
-        self.assertEqual(key_expansion(bytes_to_intlist(bytearray.fromhex(key))), [
+        self.assertEqual(key_expansion(list(bytearray.fromhex(key))), [
             0x4F, 0x6B, 0xDA, 0xA3, 0x9E, 0x2F, 0x8C, 0xB0, 0x7F, 0x5E, 0x72, 0x2D, 0x9E, 0xDE, 0xF3, 0x14,
             0x53, 0x66, 0x20, 0xA8, 0xCD, 0x49, 0xAC, 0x18, 0xB2, 0x17, 0xDE, 0x35, 0x2C, 0xC9, 0x2D, 0x21,
             0x8C, 0xBE, 0xDD, 0xD9, 0x41, 0xF7, 0x71, 0xC1, 0xF3, 0xE0, 0xAF, 0xF4, 0xDF, 0x29, 0x82, 0xD5,
```
test/test_compat.py

```diff
@@ -12,12 +12,7 @@
 
 from yt_dlp import compat
 from yt_dlp.compat import urllib  # isort: split
-from yt_dlp.compat import (
-    compat_etree_fromstring,
-    compat_expanduser,
-    compat_urllib_parse_unquote,  # noqa: TID251
-    compat_urllib_parse_urlencode,  # noqa: TID251
-)
+from yt_dlp.compat import compat_etree_fromstring, compat_expanduser
 from yt_dlp.compat.urllib.request import getproxies

@@ -43,39 +38,6 @@ def test_compat_expanduser(self):
         finally:
             os.environ['HOME'] = old_home or ''
 
-    def test_compat_urllib_parse_unquote(self):
-        self.assertEqual(compat_urllib_parse_unquote('abc%20def'), 'abc def')
-        self.assertEqual(compat_urllib_parse_unquote('%7e/abc+def'), '~/abc+def')
-        self.assertEqual(compat_urllib_parse_unquote(''), '')
-        self.assertEqual(compat_urllib_parse_unquote('%'), '%')
-        self.assertEqual(compat_urllib_parse_unquote('%%'), '%%')
-        self.assertEqual(compat_urllib_parse_unquote('%%%'), '%%%')
-        self.assertEqual(compat_urllib_parse_unquote('%2F'), '/')
-        self.assertEqual(compat_urllib_parse_unquote('%2f'), '/')
-        self.assertEqual(compat_urllib_parse_unquote('%E6%B4%A5%E6%B3%A2'), '津波')
-        self.assertEqual(
-            compat_urllib_parse_unquote('''<meta property="og:description" content="%E2%96%81%E2%96%82%E2%96%83%E2%96%84%25%E2%96%85%E2%96%86%E2%96%87%E2%96%88" />
-%<a href="https://ar.wikipedia.org/wiki/%D8%AA%D8%B3%D9%88%D9%86%D8%A7%D9%85%D9%8A">%a'''),
-            '''<meta property="og:description" content="▁▂▃▄%▅▆▇█" />
-%<a href="https://ar.wikipedia.org/wiki/تسونامي">%a''')
-        self.assertEqual(
-            compat_urllib_parse_unquote('''%28%5E%E2%97%A3_%E2%97%A2%5E%29%E3%81%A3%EF%B8%BB%E3%83%87%E2%95%90%E4%B8%80 %E2%87%80 %E2%87%80 %E2%87%80 %E2%87%80 %E2%87%80 %E2%86%B6%I%Break%25Things%'''),
-            '''(^◣_◢^)っ︻デ═一 ⇀ ⇀ ⇀ ⇀ ⇀ ↶%I%Break%Things%''')
-
     def test_compat_urllib_parse_unquote_plus(self):
         self.assertEqual(urllib.parse.unquote_plus('abc%20def'), 'abc def')
         self.assertEqual(urllib.parse.unquote_plus('%7e/abc+def'), '~/abc def')
 
-    def test_compat_urllib_parse_urlencode(self):
-        self.assertEqual(compat_urllib_parse_urlencode({'abc': 'def'}), 'abc=def')
-        self.assertEqual(compat_urllib_parse_urlencode({'abc': b'def'}), 'abc=def')
-        self.assertEqual(compat_urllib_parse_urlencode({b'abc': 'def'}), 'abc=def')
-        self.assertEqual(compat_urllib_parse_urlencode({b'abc': b'def'}), 'abc=def')
-        self.assertEqual(compat_urllib_parse_urlencode([('abc', 'def')]), 'abc=def')
-        self.assertEqual(compat_urllib_parse_urlencode([('abc', b'def')]), 'abc=def')
-        self.assertEqual(compat_urllib_parse_urlencode([(b'abc', 'def')]), 'abc=def')
-        self.assertEqual(compat_urllib_parse_urlencode([(b'abc', b'def')]), 'abc=def')
-
     def test_compat_etree_fromstring(self):
         xml = '''
             <root foo="bar" spam="中文">
```
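The deleted tests exercised thin aliases over `urllib.parse`; their assertions hold against the stdlib directly, for example:

```python
import urllib.parse

assert urllib.parse.unquote('abc%20def') == 'abc def'          # was compat_urllib_parse_unquote
assert urllib.parse.unquote('%E6%B4%A5%E6%B3%A2') == '津波'
assert urllib.parse.urlencode({'abc': 'def'}) == 'abc=def'     # was compat_urllib_parse_urlencode
```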
test/test_downloader_http.py

```diff
@@ -15,7 +15,6 @@
 from test.helper import http_server_port, try_rm
 from yt_dlp import YoutubeDL
 from yt_dlp.downloader.http import HttpFD
-from yt_dlp.utils import encodeFilename
 from yt_dlp.utils._utils import _YDLLogger as FakeLogger
 
 TEST_DIR = os.path.dirname(os.path.abspath(__file__))

@@ -82,12 +81,12 @@ def download(self, params, ep):
         ydl = YoutubeDL(params)
         downloader = HttpFD(ydl, params)
         filename = 'testfile.mp4'
-        try_rm(encodeFilename(filename))
+        try_rm(filename)
         self.assertTrue(downloader.real_download(filename, {
             'url': f'http://127.0.0.1:{self.port}/{ep}',
         }), ep)
-        self.assertEqual(os.path.getsize(encodeFilename(filename)), TEST_SIZE, ep)
-        try_rm(encodeFilename(filename))
+        self.assertEqual(os.path.getsize(filename), TEST_SIZE, ep)
+        try_rm(filename)
 
     def download_all(self, params):
         for ep in ('regular', 'no-content-length', 'no-range', 'no-range-no-content-length'):
```
test/test_traversal.py

```diff
@@ -481,7 +481,7 @@ def test_subs_list_to_dict(self):
             'id': 'name',
             'data': 'content',
             'url': 'url',
-        }, all, {subs_list_to_dict}]) == {
+        }, all, {subs_list_to_dict(lang=None)}]) == {
             'de': [{'url': 'https://example.com/subs/de.ass'}],
             'en': [{'data': 'content'}],
         }, 'subs with mandatory items missing should be filtered'

@@ -507,6 +507,54 @@ def test_subs_list_to_dict(self):
             {'url': 'https://example.com/subs/en1', 'ext': 'ext'},
             {'url': 'https://example.com/subs/en2', 'ext': 'ext'},
         ]}, '`quality` key should sort subtitle list accordingly'
+        assert traverse_obj([
+            {'name': 'de', 'url': 'https://example.com/subs/de.ass'},
+            {'name': 'de'},
+            {'name': 'en', 'content': 'content'},
+            {'url': 'https://example.com/subs/en'},
+        ], [..., {
+            'id': 'name',
+            'url': 'url',
+            'data': 'content',
+        }, all, {subs_list_to_dict(lang='en')}]) == {
+            'de': [{'url': 'https://example.com/subs/de.ass'}],
+            'en': [
+                {'data': 'content'},
+                {'url': 'https://example.com/subs/en'},
+            ],
+        }, 'optionally provided lang should be used if no id available'
+        assert traverse_obj([
+            {'name': 1, 'url': 'https://example.com/subs/de1'},
+            {'name': {}, 'url': 'https://example.com/subs/de2'},
+            {'name': 'de', 'ext': 1, 'url': 'https://example.com/subs/de3'},
+            {'name': 'de', 'ext': {}, 'url': 'https://example.com/subs/de4'},
+        ], [..., {
+            'id': 'name',
+            'url': 'url',
+            'ext': 'ext',
+        }, all, {subs_list_to_dict(lang=None)}]) == {
+            'de': [
+                {'url': 'https://example.com/subs/de3'},
+                {'url': 'https://example.com/subs/de4'},
+            ],
+        }, 'non str types should be ignored for id and ext'
+        assert traverse_obj([
+            {'name': 1, 'url': 'https://example.com/subs/de1'},
+            {'name': {}, 'url': 'https://example.com/subs/de2'},
+            {'name': 'de', 'ext': 1, 'url': 'https://example.com/subs/de3'},
+            {'name': 'de', 'ext': {}, 'url': 'https://example.com/subs/de4'},
+        ], [..., {
+            'id': 'name',
+            'url': 'url',
+            'ext': 'ext',
+        }, all, {subs_list_to_dict(lang='de')}]) == {
+            'de': [
+                {'url': 'https://example.com/subs/de1'},
+                {'url': 'https://example.com/subs/de2'},
+                {'url': 'https://example.com/subs/de3'},
+                {'url': 'https://example.com/subs/de4'},
+            ],
+        }, 'non str types should be replaced by default id'
 
     def test_trim_str(self):
         with pytest.raises(TypeError):

@@ -525,7 +573,7 @@ def test_trim_str(self):
     def test_unpack(self):
         assert unpack(lambda *x: ''.join(map(str, x)))([1, 2, 3]) == '123'
         assert unpack(join_nonempty)([1, 2, 3]) == '1-2-3'
-        assert unpack(join_nonempty(delim=' '))([1, 2, 3]) == '1 2 3'
+        assert unpack(join_nonempty, delim=' ')([1, 2, 3]) == '1 2 3'
         with pytest.raises(TypeError):
             unpack(join_nonempty)()
         with pytest.raises(TypeError):
```
test/test_utils.py

```diff
@@ -21,7 +21,6 @@
 from yt_dlp.compat import (
     compat_etree_fromstring,
     compat_HTMLParseError,
-    compat_os_name,
 )
 from yt_dlp.utils import (
     Config,

@@ -49,7 +48,6 @@
     dfxp2srt,
     encode_base_n,
     encode_compat_str,
-    encodeFilename,
     expand_path,
     extract_attributes,
     extract_basic_auth,

@@ -69,10 +67,8 @@
     get_elements_html_by_class,
     get_elements_text_and_html_by_attribute,
     int_or_none,
-    intlist_to_bytes,
     iri_to_uri,
     is_html,
     join_nonempty,
     js_to_json,
     limit_length,
     locked_file,

@@ -567,10 +563,10 @@ def test_smuggle_url(self):
         self.assertEqual(res_data, {'a': 'b', 'c': 'd'})
 
     def test_shell_quote(self):
-        args = ['ffmpeg', '-i', encodeFilename('ñ€ß\'.mp4')]
+        args = ['ffmpeg', '-i', 'ñ€ß\'.mp4']
         self.assertEqual(
             shell_quote(args),
-            """ffmpeg -i 'ñ€ß'"'"'.mp4'""" if compat_os_name != 'nt' else '''ffmpeg -i "ñ€ß'.mp4"''')
+            """ffmpeg -i 'ñ€ß'"'"'.mp4'""" if os.name != 'nt' else '''ffmpeg -i "ñ€ß'.mp4"''')
 
     def test_float_or_none(self):
         self.assertEqual(float_or_none('42.42'), 42.42)

@@ -1310,15 +1306,10 @@ def test_clean_html(self):
         self.assertEqual(clean_html('a:\n "b"'), 'a: "b"')
         self.assertEqual(clean_html('a<br>\xa0b'), 'a\nb')
 
-    def test_intlist_to_bytes(self):
-        self.assertEqual(
-            intlist_to_bytes([0, 1, 127, 128, 255]),
-            b'\x00\x01\x7f\x80\xff')
-
     def test_args_to_str(self):
         self.assertEqual(
             args_to_str(['foo', 'ba/r', '-baz', '2 be', '']),
-            'foo ba/r -baz \'2 be\' \'\'' if compat_os_name != 'nt' else 'foo ba/r -baz "2 be" ""',
+            'foo ba/r -baz \'2 be\' \'\'' if os.name != 'nt' else 'foo ba/r -baz "2 be" ""',
         )
 
     def test_parse_filesize(self):
```
@ -2118,7 +2109,7 @@ def test_extract_basic_auth(self):
|
|||
assert extract_basic_auth('http://user:@foo.bar') == ('http://foo.bar', 'Basic dXNlcjo=')
|
||||
assert extract_basic_auth('http://user:pass@foo.bar') == ('http://foo.bar', 'Basic dXNlcjpwYXNz')
|
||||
|
||||
@unittest.skipUnless(compat_os_name == 'nt', 'Only relevant on Windows')
|
||||
@unittest.skipUnless(os.name == 'nt', 'Only relevant on Windows')
|
||||
def test_windows_escaping(self):
|
||||
tests = [
|
||||
'test"&',
|
||||
|
@ -2158,10 +2149,6 @@ def test_partial_application(self):
|
|||
assert int_or_none(v=10) == 10, 'keyword passed positional should call function'
|
||||
assert int_or_none(scale=0.1)(10) == 100, 'call after partial application should call the function'
|
||||
|
||||
assert callable(join_nonempty(delim=', ')), 'varargs positional should apply partially'
|
||||
assert callable(join_nonempty()), 'varargs positional should apply partially'
|
||||
assert join_nonempty(None, delim=', ') == '', 'passed varargs should call the function'
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
unittest.main()
|
||||
|
|
yt_dlp/YoutubeDL.py

```diff
@@ -26,7 +26,7 @@
 from .cache import Cache
 from .compat import urllib  # isort: split
-from .compat import compat_os_name, urllib_req_to_req
+from .compat import urllib_req_to_req
 from .cookies import CookieLoadError, LenientSimpleCookie, load_cookies
 from .downloader import FFmpegFD, get_suitable_downloader, shorten_protocol_name
 from .downloader.rtmp import rtmpdump_version

@@ -109,7 +109,6 @@
     determine_ext,
     determine_protocol,
     encode_compat_str,
-    encodeFilename,
     escapeHTML,
     expand_path,
     extract_basic_auth,

@@ -167,7 +166,7 @@
 )
 from .version import CHANNEL, ORIGIN, RELEASE_GIT_HEAD, VARIANT, __version__
 
-if compat_os_name == 'nt':
+if os.name == 'nt':
     import ctypes

@@ -643,7 +642,7 @@ def __init__(self, params=None, auto_init=True):
             out=stdout,
             error=sys.stderr,
             screen=sys.stderr if self.params.get('quiet') else stdout,
-            console=None if compat_os_name == 'nt' else next(
+            console=None if os.name == 'nt' else next(
                 filter(supports_terminal_sequences, (sys.stderr, sys.stdout)), None),
         )

@@ -952,7 +951,7 @@ def to_stderr(self, message, only_once=False):
             self._write_string(f'{self._bidi_workaround(message)}\n', self._out_files.error, only_once=only_once)
 
     def _send_console_code(self, code):
-        if compat_os_name == 'nt' or not self._out_files.console:
+        if os.name == 'nt' or not self._out_files.console:
             return
         self._write_string(code, self._out_files.console)

@@ -960,7 +959,7 @@ def to_console_title(self, message):
         if not self.params.get('consoletitle', False):
             return
         message = remove_terminal_sequences(message)
-        if compat_os_name == 'nt':
+        if os.name == 'nt':
             if ctypes.windll.kernel32.GetConsoleWindow():
                 # c_wchar_p() might not be necessary if `message` is
                 # already of type unicode()

@@ -3255,9 +3254,9 @@ def check_max_downloads():
 
         if full_filename is None:
             return
-        if not self._ensure_dir_exists(encodeFilename(full_filename)):
+        if not self._ensure_dir_exists(full_filename):
             return
-        if not self._ensure_dir_exists(encodeFilename(temp_filename)):
+        if not self._ensure_dir_exists(temp_filename):
             return
 
         if self._write_description('video', info_dict,

@@ -3289,16 +3288,16 @@ def check_max_downloads():
         if self.params.get('writeannotations', False):
             annofn = self.prepare_filename(info_dict, 'annotation')
             if annofn:
-                if not self._ensure_dir_exists(encodeFilename(annofn)):
+                if not self._ensure_dir_exists(annofn):
                     return
-                if not self.params.get('overwrites', True) and os.path.exists(encodeFilename(annofn)):
+                if not self.params.get('overwrites', True) and os.path.exists(annofn):
                     self.to_screen('[info] Video annotations are already present')
                 elif not info_dict.get('annotations'):
                     self.report_warning('There are no annotations to write.')
                 else:
                     try:
                         self.to_screen('[info] Writing video annotations to: ' + annofn)
-                        with open(encodeFilename(annofn), 'w', encoding='utf-8') as annofile:
+                        with open(annofn, 'w', encoding='utf-8') as annofile:
                             annofile.write(info_dict['annotations'])
                     except (KeyError, TypeError):
                         self.report_warning('There are no annotations to write.')

@@ -3314,14 +3313,14 @@ def _write_link_file(link_type):
                     f'Cannot write internet shortcut file because the actual URL of "{info_dict["webpage_url"]}" is unknown')
                 return True
             linkfn = replace_extension(self.prepare_filename(info_dict, 'link'), link_type, info_dict.get('ext'))
-            if not self._ensure_dir_exists(encodeFilename(linkfn)):
+            if not self._ensure_dir_exists(linkfn):
                 return False
-            if self.params.get('overwrites', True) and os.path.exists(encodeFilename(linkfn)):
+            if self.params.get('overwrites', True) and os.path.exists(linkfn):
                 self.to_screen(f'[info] Internet shortcut (.{link_type}) is already present')
                 return True
             try:
                 self.to_screen(f'[info] Writing internet shortcut (.{link_type}) to: {linkfn}')
-                with open(encodeFilename(to_high_limit_path(linkfn)), 'w', encoding='utf-8',
+                with open(to_high_limit_path(linkfn), 'w', encoding='utf-8',
                           newline='\r\n' if link_type == 'url' else '\n') as linkfile:
                     template_vars = {'url': url}
                     if link_type == 'desktop':

@@ -3352,7 +3351,7 @@ def _write_link_file(link_type):
 
         if self.params.get('skip_download'):
             info_dict['filepath'] = temp_filename
-            info_dict['__finaldir'] = os.path.dirname(os.path.abspath(encodeFilename(full_filename)))
+            info_dict['__finaldir'] = os.path.dirname(os.path.abspath(full_filename))
             info_dict['__files_to_move'] = files_to_move
             replace_info_dict(self.run_pp(MoveFilesAfterDownloadPP(self, False), info_dict))
             info_dict['__write_download_archive'] = self.params.get('force_write_download_archive')

@@ -3482,7 +3481,7 @@ def correct_ext(filename, ext=new_ext):
                     self.report_file_already_downloaded(dl_filename)
 
                 dl_filename = dl_filename or temp_filename
-                info_dict['__finaldir'] = os.path.dirname(os.path.abspath(encodeFilename(full_filename)))
+                info_dict['__finaldir'] = os.path.dirname(os.path.abspath(full_filename))
 
             except network_exceptions as err:
                 self.report_error(f'unable to download video data: {err}')

@@ -4297,7 +4296,7 @@ def _write_description(self, label, ie_result, descfn):
         else:
             try:
                 self.to_screen(f'[info] Writing {label} description to: {descfn}')
-                with open(encodeFilename(descfn), 'w', encoding='utf-8') as descfile:
+                with open(descfn, 'w', encoding='utf-8') as descfile:
                     descfile.write(ie_result['description'])
             except OSError:
                 self.report_error(f'Cannot write {label} description file {descfn}')

@@ -4381,7 +4380,9 @@ def _write_thumbnails(self, label, info_dict, filename, thumb_filename_base=None
             return None
 
         for idx, t in list(enumerate(thumbnails))[::-1]:
-            thumb_ext = (f'{t["id"]}.' if multiple else '') + determine_ext(t['url'], 'jpg')
+            thumb_ext = t.get('ext') or determine_ext(t['url'], 'jpg')
+            if multiple:
+                thumb_ext = f'{t["id"]}.{thumb_ext}'
             thumb_display_id = f'{label} thumbnail {t["id"]}'
             thumb_filename = replace_extension(filename, thumb_ext, info_dict.get('ext'))
             thumb_filename_final = replace_extension(thumb_filename_base, thumb_ext, info_dict.get('ext'))

@@ -4397,7 +4398,7 @@ def _write_thumbnails(self, label, info_dict, filename, thumb_filename_base=None
                 try:
                     uf = self.urlopen(Request(t['url'], headers=t.get('http_headers', {})))
                     self.to_screen(f'[info] Writing {thumb_display_id} to: {thumb_filename}')
-                    with open(encodeFilename(thumb_filename), 'wb') as thumbf:
+                    with open(thumb_filename, 'wb') as thumbf:
                         shutil.copyfileobj(uf, thumbf)
                     ret.append((thumb_filename, thumb_filename_final))
                     t['filepath'] = thumb_filename
```
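A hypothetical thumbnail entry showing what the new `t.get('ext')` override enables (all values made up): with no usable extension in the URL, an extractor-supplied `ext` now wins over the `determine_ext` guess.

```python
thumbnail = {
    'id': '0',
    'url': 'https://cdn.example.com/thumb?id=abc123',  # no extension in the URL
    'ext': 'webp',  # explicit override; previously the 'jpg' fallback was used
}
```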
yt_dlp/__init__.py

```diff
@@ -14,7 +14,6 @@
 import re
 import traceback
 
-from .compat import compat_os_name
 from .cookies import SUPPORTED_BROWSERS, SUPPORTED_KEYRINGS, CookieLoadError
 from .downloader.external import get_external_downloader
 from .extractor import list_extractor_classes

@@ -44,7 +43,6 @@
     GeoUtils,
     PlaylistEntries,
     SameFileError,
-    decodeOption,
     download_range_func,
     expand_path,
     float_or_none,

@@ -883,8 +881,8 @@ def parse_options(argv=None):
         'listsubtitles': opts.listsubtitles,
         'subtitlesformat': opts.subtitlesformat,
         'subtitleslangs': opts.subtitleslangs,
-        'matchtitle': decodeOption(opts.matchtitle),
-        'rejecttitle': decodeOption(opts.rejecttitle),
+        'matchtitle': opts.matchtitle,
+        'rejecttitle': opts.rejecttitle,
         'max_downloads': opts.max_downloads,
         'prefer_free_formats': opts.prefer_free_formats,
         'trim_file_name': opts.trim_file_name,

@@ -1053,7 +1051,7 @@ def make_row(target, handler):
         ydl.warn_if_short_id(args)
 
         # Show a useful error message and wait for keypress if not launched from shell on Windows
-        if not args and compat_os_name == 'nt' and getattr(sys, 'frozen', False):
+        if not args and os.name == 'nt' and getattr(sys, 'frozen', False):
             import ctypes.wintypes
             import msvcrt
```
yt_dlp/aes.py

```diff
@@ -3,7 +3,6 @@
 
 from .compat import compat_ord
 from .dependencies import Cryptodome
-from .utils import bytes_to_intlist, intlist_to_bytes
 
 if Cryptodome.AES:
     def aes_cbc_decrypt_bytes(data, key, iv):

@@ -17,15 +16,15 @@ def aes_gcm_decrypt_and_verify_bytes(data, key, tag, nonce):
 else:
     def aes_cbc_decrypt_bytes(data, key, iv):
         """ Decrypt bytes with AES-CBC using native implementation since pycryptodome is unavailable """
-        return intlist_to_bytes(aes_cbc_decrypt(*map(bytes_to_intlist, (data, key, iv))))
+        return bytes(aes_cbc_decrypt(*map(list, (data, key, iv))))
 
     def aes_gcm_decrypt_and_verify_bytes(data, key, tag, nonce):
         """ Decrypt bytes with AES-GCM using native implementation since pycryptodome is unavailable """
-        return intlist_to_bytes(aes_gcm_decrypt_and_verify(*map(bytes_to_intlist, (data, key, tag, nonce))))
+        return bytes(aes_gcm_decrypt_and_verify(*map(list, (data, key, tag, nonce))))
 
 
 def aes_cbc_encrypt_bytes(data, key, iv, **kwargs):
-    return intlist_to_bytes(aes_cbc_encrypt(*map(bytes_to_intlist, (data, key, iv)), **kwargs))
+    return bytes(aes_cbc_encrypt(*map(list, (data, key, iv)), **kwargs))
 
 
 BLOCK_SIZE_BYTES = 16

@@ -221,7 +220,7 @@ def aes_gcm_decrypt_and_verify(data, key, tag, nonce):
         j0 = [*nonce, 0, 0, 0, 1]
     else:
         fill = (BLOCK_SIZE_BYTES - (len(nonce) % BLOCK_SIZE_BYTES)) % BLOCK_SIZE_BYTES + 8
-        ghash_in = nonce + [0] * fill + bytes_to_intlist((8 * len(nonce)).to_bytes(8, 'big'))
+        ghash_in = nonce + [0] * fill + list((8 * len(nonce)).to_bytes(8, 'big'))
         j0 = ghash(hash_subkey, ghash_in)
 
     # TODO: add nonce support to aes_ctr_decrypt

@@ -234,9 +233,9 @@ def aes_gcm_decrypt_and_verify(data, key, tag, nonce):
     s_tag = ghash(
         hash_subkey,
         data
-        + [0] * pad_len  # pad
-        + bytes_to_intlist((0 * 8).to_bytes(8, 'big')  # length of associated data
-                           + ((len(data) * 8).to_bytes(8, 'big'))),  # length of data
+        + [0] * pad_len  # pad
+        + list((0 * 8).to_bytes(8, 'big')  # length of associated data
+               + ((len(data) * 8).to_bytes(8, 'big'))),  # length of data
     )
 
     if tag != aes_ctr_encrypt(s_tag, key, j0):

@@ -300,8 +299,8 @@ def aes_decrypt_text(data, password, key_size_bytes):
     """
     NONCE_LENGTH_BYTES = 8
 
-    data = bytes_to_intlist(base64.b64decode(data))
-    password = bytes_to_intlist(password.encode())
+    data = list(base64.b64decode(data))
+    password = list(password.encode())
 
     key = password[:key_size_bytes] + [0] * (key_size_bytes - len(password))
     key = aes_encrypt(key[:BLOCK_SIZE_BYTES], key_expansion(key)) * (key_size_bytes // BLOCK_SIZE_BYTES)

@@ -310,7 +309,7 @@ def aes_decrypt_text(data, password, key_size_bytes):
     cipher = data[NONCE_LENGTH_BYTES:]
 
     decrypted_data = aes_ctr_decrypt(cipher, key, nonce + [0] * (BLOCK_SIZE_BYTES - NONCE_LENGTH_BYTES))
-    return intlist_to_bytes(decrypted_data)
+    return bytes(decrypted_data)
 
 
 RCON = (0x8d, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36)
```
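A round-trip sketch of the bytes-level helpers whose fallbacks changed above; it should behave the same with or without pycryptodome, assuming `aes_cbc_encrypt_bytes` keeps its default PKCS#7 padding (key and IV are arbitrary 16-byte examples):

```python
from yt_dlp.aes import aes_cbc_decrypt_bytes, aes_cbc_encrypt_bytes, unpad_pkcs7

key, iv = bytes(range(16)), bytes(16)  # arbitrary example values
ct = aes_cbc_encrypt_bytes(b'Secret message goes here', key, iv)
pt = unpad_pkcs7(aes_cbc_decrypt_bytes(ct, key, iv))
assert pt == b'Secret message goes here'
```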
yt_dlp/compat/__init__.py

```diff
@@ -1,5 +1,4 @@
 import os
-import sys
 import xml.etree.ElementTree as etree
 
 from .compat_utils import passthrough_module

@@ -24,33 +23,14 @@ def compat_etree_fromstring(text):
     return etree.XML(text, parser=etree.XMLParser(target=_TreeBuilder()))
 
 
-compat_os_name = os._name if os.name == 'java' else os.name
-
-
-def compat_shlex_quote(s):
-    from ..utils import shell_quote
-    return shell_quote(s)
-
-
 def compat_ord(c):
     return c if isinstance(c, int) else ord(c)
 
 
-if compat_os_name == 'nt' and sys.version_info < (3, 8):
-    # os.path.realpath on Windows does not follow symbolic links
-    # prior to Python 3.8 (see https://bugs.python.org/issue9949)
-    def compat_realpath(path):
-        while os.path.islink(path):
-            path = os.path.abspath(os.readlink(path))
-        return os.path.realpath(path)
-else:
-    compat_realpath = os.path.realpath
-
-
 # Python 3.8+ does not honor %HOME% on windows, but this breaks compatibility with youtube-dl
 # See https://github.com/yt-dlp/yt-dlp/issues/792
 # https://docs.python.org/3/library/os.path.html#os.path.expanduser
-if compat_os_name in ('nt', 'ce'):
+if os.name in ('nt', 'ce'):
     def compat_expanduser(path):
         HOME = os.environ.get('HOME')
         if not HOME:
```
yt_dlp/compat/_deprecated.py

```diff
@@ -8,16 +8,14 @@
     DeprecationWarning(f'{__name__}.{attr} is deprecated'), stacklevel=6))
 del passthrough_module
 
-import base64
-import urllib.error
-import urllib.parse
+import functools  # noqa: F401
+import os
 
-compat_str = str
+compat_os_name = os.name
+compat_realpath = os.path.realpath
 
-compat_b64decode = base64.b64decode
-compat_urlparse = urllib.parse
-compat_parse_qs = urllib.parse.parse_qs
-compat_urllib_parse_unquote = urllib.parse.unquote
-compat_urllib_parse_urlencode = urllib.parse.urlencode
-compat_urllib_parse_urlparse = urllib.parse.urlparse
+
+def compat_shlex_quote(s):
+    from ..utils import shell_quote
+    return shell_quote(s)
```
yt_dlp/compat/_legacy.py

```diff
@@ -30,7 +30,7 @@
 from re import Pattern as compat_Pattern  # noqa: F401
 from re import match as compat_Match  # noqa: F401
 
-from . import compat_expanduser, compat_HTMLParseError, compat_realpath
+from . import compat_expanduser, compat_HTMLParseError
 from .compat_utils import passthrough_module
 from ..dependencies import brotli as compat_brotli  # noqa: F401
 from ..dependencies import websockets as compat_websockets  # noqa: F401

@@ -78,7 +78,7 @@ def compat_setenv(key, value, env=os.environ):
 compat_map = map
 compat_numeric_types = (int, float, complex)
 compat_os_path_expanduser = compat_expanduser
-compat_os_path_realpath = compat_realpath
+compat_os_path_realpath = os.path.realpath
 compat_print = print
 compat_shlex_split = shlex.split
 compat_socket_create_connection = socket.create_connection

@@ -104,5 +104,12 @@ def compat_setenv(key, value, env=os.environ):
 compat_xpath = lambda xpath: xpath
 compat_zip = zip
 workaround_optparse_bug9161 = lambda: None
+compat_str = str
+compat_b64decode = base64.b64decode
+compat_urlparse = urllib.parse
+compat_parse_qs = urllib.parse.parse_qs
+compat_urllib_parse_unquote = urllib.parse.unquote
+compat_urllib_parse_urlencode = urllib.parse.urlencode
+compat_urllib_parse_urlparse = urllib.parse.urlparse
 
 legacy = []
```
yt_dlp/compat/functools.py (file deleted)

```diff
@@ -1,7 +0,0 @@
-# flake8: noqa: F405
-from functools import *  # noqa: F403
-
-from .compat_utils import passthrough_module
-
-passthrough_module(__name__, 'functools')
-del passthrough_module
```
yt_dlp/compat/urllib/__init__.py

```diff
@@ -7,9 +7,9 @@
 del passthrough_module
 
 
-from .. import compat_os_name
+import os
 
-if compat_os_name == 'nt':
+if os.name == 'nt':
     # On older Python versions, proxies are extracted from Windows registry erroneously. [1]
     # If the https proxy in the registry does not have a scheme, urllib will incorrectly add https:// to it. [2]
     # It is unlikely that the user has actually set it to be https, so we should be fine to safely downgrade

@@ -37,4 +37,4 @@ def getproxies_registry_patched():
 def getproxies():
     return getproxies_environment() or getproxies_registry_patched()
 
-del compat_os_name
+del os
```
yt_dlp/cookies.py

```diff
@@ -25,7 +25,6 @@
     aes_gcm_decrypt_and_verify_bytes,
     unpad_pkcs7,
 )
-from .compat import compat_os_name
 from .dependencies import (
     _SECRETSTORAGE_UNAVAILABLE_REASON,
     secretstorage,

@@ -343,7 +342,7 @@ def _extract_chrome_cookies(browser_name, profile, keyring, logger):
         logger.debug(f'cookie version breakdown: {counts}')
         return jar
     except PermissionError as error:
-        if compat_os_name == 'nt' and error.errno == 13:
+        if os.name == 'nt' and error.errno == 13:
             message = 'Could not copy Chrome cookie database. See https://github.com/yt-dlp/yt-dlp/issues/7271 for more info'
             logger.error(message)
             raise DownloadError(message)  # force exit
```
@ -20,9 +20,7 @@
|
|||
Namespace,
|
||||
RetryManager,
|
||||
classproperty,
|
||||
decodeArgument,
|
||||
deprecation_warning,
|
||||
encodeFilename,
|
||||
format_bytes,
|
||||
join_nonempty,
|
||||
parse_bytes,
|
||||
|
@ -219,7 +217,7 @@ def slow_down(self, start_time, now, byte_counter):
|
|||
def temp_name(self, filename):
|
||||
"""Returns a temporary filename for the given filename."""
|
||||
if self.params.get('nopart', False) or filename == '-' or \
|
||||
(os.path.exists(encodeFilename(filename)) and not os.path.isfile(encodeFilename(filename))):
|
||||
(os.path.exists(filename) and not os.path.isfile(filename)):
|
||||
return filename
|
||||
return filename + '.part'
|
||||
|
||||
|
@ -273,7 +271,7 @@ def try_utime(self, filename, last_modified_hdr):
|
|||
"""Try to set the last-modified time of the given file."""
|
||||
if last_modified_hdr is None:
|
||||
return
|
||||
if not os.path.isfile(encodeFilename(filename)):
|
||||
if not os.path.isfile(filename):
|
||||
return
|
||||
timestr = last_modified_hdr
|
||||
if timestr is None:
|
||||
|
@ -432,13 +430,13 @@ def download(self, filename, info_dict, subtitle=False):
|
|||
"""
|
||||
nooverwrites_and_exists = (
|
||||
not self.params.get('overwrites', True)
|
||||
and os.path.exists(encodeFilename(filename))
|
||||
and os.path.exists(filename)
|
||||
)
|
||||
|
||||
if not hasattr(filename, 'write'):
|
||||
continuedl_and_exists = (
|
||||
self.params.get('continuedl', True)
|
||||
and os.path.isfile(encodeFilename(filename))
|
||||
and os.path.isfile(filename)
|
||||
and not self.params.get('nopart', False)
|
||||
)
|
||||
|
||||
|
@ -448,7 +446,7 @@ def download(self, filename, info_dict, subtitle=False):
|
|||
self._hook_progress({
|
||||
'filename': filename,
|
||||
'status': 'finished',
|
||||
'total_bytes': os.path.getsize(encodeFilename(filename)),
|
||||
'total_bytes': os.path.getsize(filename),
|
||||
}, info_dict)
|
||||
self._finish_multiline_status()
|
||||
return True, False
|
||||
|
@ -489,9 +487,7 @@ def _debug_cmd(self, args, exe=None):
|
|||
if not self.params.get('verbose', False):
|
||||
return
|
||||
|
||||
str_args = [decodeArgument(a) for a in args]
|
||||
|
||||
if exe is None:
|
||||
exe = os.path.basename(str_args[0])
|
||||
exe = os.path.basename(args[0])
|
||||
|
||||
self.write_debug(f'{exe} command line: {shell_quote(str_args)}')
|
||||
self.write_debug(f'{exe} command line: {shell_quote(args)}')
|
||||
|
|
|
@ -23,7 +23,6 @@
|
|||
cli_valueless_option,
|
||||
determine_ext,
|
||||
encodeArgument,
|
||||
encodeFilename,
|
||||
find_available_port,
|
||||
remove_end,
|
||||
traverse_obj,
|
||||
|
@ -67,7 +66,7 @@ def real_download(self, filename, info_dict):
|
|||
'elapsed': time.time() - started,
|
||||
}
|
||||
if filename != '-':
|
||||
fsize = os.path.getsize(encodeFilename(tmpfilename))
|
||||
fsize = os.path.getsize(tmpfilename)
|
||||
self.try_rename(tmpfilename, filename)
|
||||
status.update({
|
||||
'downloaded_bytes': fsize,
|
||||
|
@ -184,9 +183,9 @@ def _call_downloader(self, tmpfilename, info_dict):
|
|||
dest.write(decrypt_fragment(fragment, src.read()))
|
||||
src.close()
|
||||
if not self.params.get('keep_fragments', False):
|
||||
self.try_remove(encodeFilename(fragment_filename))
|
||||
self.try_remove(fragment_filename)
|
||||
dest.close()
|
||||
self.try_remove(encodeFilename(f'{tmpfilename}.frag.urls'))
|
||||
self.try_remove(f'{tmpfilename}.frag.urls')
|
||||
return 0
|
||||
|
||||
def _call_process(self, cmd, info_dict):
|
||||
|
@ -620,7 +619,7 @@ def _call_downloader(self, tmpfilename, info_dict):
|
|||
args += self._configuration_args(('_o1', '_o', ''))
|
||||
|
||||
args = [encodeArgument(opt) for opt in args]
|
||||
args.append(encodeFilename(ffpp._ffmpeg_filename_argument(tmpfilename), True))
|
||||
args.append(ffpp._ffmpeg_filename_argument(tmpfilename))
|
||||
self._debug_cmd(args)
|
||||
|
||||
piped = any(fmt['url'] in ('-', 'pipe:') for fmt in selected_formats)
|
||||
|
|
|
@ -9,10 +9,9 @@
|
|||
from .common import FileDownloader
|
||||
from .http import HttpFD
|
||||
from ..aes import aes_cbc_decrypt_bytes, unpad_pkcs7
|
||||
from ..compat import compat_os_name
|
||||
from ..networking import Request
|
||||
from ..networking.exceptions import HTTPError, IncompleteRead
|
||||
from ..utils import DownloadError, RetryManager, encodeFilename, traverse_obj
|
||||
from ..utils import DownloadError, RetryManager, traverse_obj
|
||||
from ..utils.networking import HTTPHeaderDict
|
||||
from ..utils.progress import ProgressCalculator
|
||||
|
||||
|
@ -152,7 +151,7 @@ def _append_fragment(self, ctx, frag_content):
|
|||
if self.__do_ytdl_file(ctx):
|
||||
self._write_ytdl_file(ctx)
|
||||
if not self.params.get('keep_fragments', False):
|
||||
self.try_remove(encodeFilename(ctx['fragment_filename_sanitized']))
|
||||
self.try_remove(ctx['fragment_filename_sanitized'])
|
||||
del ctx['fragment_filename_sanitized']
|
||||
|
||||
def _prepare_frag_download(self, ctx):
|
||||
|
@ -188,7 +187,7 @@ def _prepare_frag_download(self, ctx):
|
|||
})
|
||||
|
||||
if self.__do_ytdl_file(ctx):
|
||||
ytdl_file_exists = os.path.isfile(encodeFilename(self.ytdl_filename(ctx['filename'])))
|
||||
ytdl_file_exists = os.path.isfile(self.ytdl_filename(ctx['filename']))
|
||||
continuedl = self.params.get('continuedl', True)
|
||||
if continuedl and ytdl_file_exists:
|
||||
self._read_ytdl_file(ctx)
|
||||
|
@ -390,7 +389,7 @@ class FTPE(concurrent.futures.ThreadPoolExecutor):
|
|||
def __exit__(self, exc_type, exc_val, exc_tb):
|
||||
pass
|
||||
|
||||
if compat_os_name == 'nt':
|
||||
if os.name == 'nt':
|
||||
def future_result(future):
|
||||
while True:
|
||||
try:
|
||||
|
|
|
@ -15,7 +15,6 @@
|
|||
ThrottledDownload,
|
||||
XAttrMetadataError,
|
||||
XAttrUnavailableError,
|
||||
encodeFilename,
|
||||
int_or_none,
|
||||
parse_http_range,
|
||||
try_call,
|
||||
|
@ -58,9 +57,8 @@ class DownloadContext(dict):
|
|||
|
||||
if self.params.get('continuedl', True):
|
||||
# Establish possible resume length
|
||||
if os.path.isfile(encodeFilename(ctx.tmpfilename)):
|
||||
ctx.resume_len = os.path.getsize(
|
||||
encodeFilename(ctx.tmpfilename))
|
||||
if os.path.isfile(ctx.tmpfilename):
|
||||
ctx.resume_len = os.path.getsize(ctx.tmpfilename)
|
||||
|
||||
ctx.is_resume = ctx.resume_len > 0
|
||||
|
||||
|
@ -241,7 +239,7 @@ def retry(e):
|
|||
ctx.resume_len = byte_counter
|
||||
else:
|
||||
try:
|
||||
ctx.resume_len = os.path.getsize(encodeFilename(ctx.tmpfilename))
|
||||
ctx.resume_len = os.path.getsize(ctx.tmpfilename)
|
||||
except FileNotFoundError:
|
||||
ctx.resume_len = 0
|
||||
raise RetryDownload(e)
|
||||
|
|
|
@ -8,7 +8,6 @@
|
|||
Popen,
|
||||
check_executable,
|
||||
encodeArgument,
|
||||
encodeFilename,
|
||||
get_exe_version,
|
||||
)
|
||||
|
||||
|
@ -179,7 +178,7 @@ def run_rtmpdump(args):
|
|||
return False
|
||||
|
||||
while retval in (RD_INCOMPLETE, RD_FAILED) and not test and not live:
|
||||
prevsize = os.path.getsize(encodeFilename(tmpfilename))
|
||||
prevsize = os.path.getsize(tmpfilename)
|
||||
self.to_screen(f'[rtmpdump] Downloaded {prevsize} bytes')
|
||||
time.sleep(5.0) # This seems to be needed
|
||||
args = [*basic_args, '--resume']
|
||||
|
@ -187,7 +186,7 @@ def run_rtmpdump(args):
|
|||
args += ['--skip', '1']
|
||||
args = [encodeArgument(a) for a in args]
|
||||
retval = run_rtmpdump(args)
|
||||
cursize = os.path.getsize(encodeFilename(tmpfilename))
|
||||
cursize = os.path.getsize(tmpfilename)
|
||||
if prevsize == cursize and retval == RD_FAILED:
|
||||
break
|
||||
# Some rtmp streams seem abort after ~ 99.8%. Don't complain for those
|
||||
|
@ -196,7 +195,7 @@ def run_rtmpdump(args):
|
|||
retval = RD_SUCCESS
|
||||
break
|
||||
if retval == RD_SUCCESS or (test and retval == RD_INCOMPLETE):
|
||||
fsize = os.path.getsize(encodeFilename(tmpfilename))
|
||||
fsize = os.path.getsize(tmpfilename)
|
||||
self.to_screen(f'[rtmpdump] Downloaded {fsize} bytes')
|
||||
self.try_rename(tmpfilename, filename)
|
||||
self._hook_progress({
|
||||
|
|
|
@ -2,7 +2,7 @@
|
|||
import subprocess
|
||||
|
||||
from .common import FileDownloader
|
||||
from ..utils import check_executable, encodeFilename
|
||||
from ..utils import check_executable
|
||||
|
||||
|
||||
class RtspFD(FileDownloader):
|
||||
|
@ -26,7 +26,7 @@ def real_download(self, filename, info_dict):
|
|||
|
||||
retval = subprocess.call(args)
|
||||
if retval == 0:
|
||||
fsize = os.path.getsize(encodeFilename(tmpfilename))
|
||||
fsize = os.path.getsize(tmpfilename)
|
||||
self.to_screen(f'\r[{args[0]}] {fsize} bytes')
|
||||
self.try_rename(tmpfilename, filename)
|
||||
self._hook_progress({
|
||||
|
|
|
@ -208,6 +208,10 @@
|
|||
BandcampUserIE,
|
||||
BandcampWeeklyIE,
|
||||
)
|
||||
from .bandlab import (
|
||||
BandlabIE,
|
||||
BandlabPlaylistIE,
|
||||
)
|
||||
from .bannedvideo import BannedVideoIE
|
||||
from .bbc import (
|
||||
BBCIE,
|
||||
|
@ -942,6 +946,10 @@
|
|||
from .kankanews import KankaNewsIE
|
||||
from .karaoketv import KaraoketvIE
|
||||
from .kelbyone import KelbyOneIE
|
||||
from .kenh14 import (
|
||||
Kenh14PlaylistIE,
|
||||
Kenh14VideoIE,
|
||||
)
|
||||
from .khanacademy import (
|
||||
KhanAcademyIE,
|
||||
KhanAcademyUnitIE,
|
||||
|
@ -1131,12 +1139,6 @@
|
|||
MicrosoftMediusIE,
|
||||
)
|
||||
from .microsoftstream import MicrosoftStreamIE
|
||||
from .mildom import (
|
||||
MildomClipIE,
|
||||
MildomIE,
|
||||
MildomUserVodIE,
|
||||
MildomVodIE,
|
||||
)
|
||||
from .minds import (
|
||||
MindsChannelIE,
|
||||
MindsGroupIE,
|
||||
|
@ -1518,8 +1520,8 @@
|
|||
from .philharmoniedeparis import PhilharmonieDeParisIE
|
||||
from .phoenix import PhoenixIE
|
||||
from .photobucket import PhotobucketIE
|
||||
from .pialive import PiaLiveIE
|
||||
from .piapro import PiaproIE
|
||||
from .piaulizaportal import PIAULIZAPortalIE
|
||||
from .picarto import (
|
||||
PicartoIE,
|
||||
PicartoVodIE,
|
||||
|
@ -1555,10 +1557,6 @@
|
|||
)
|
||||
from .podchaser import PodchaserIE
|
||||
from .podomatic import PodomaticIE
|
||||
from .pokemon import (
|
||||
PokemonIE,
|
||||
PokemonWatchIE,
|
||||
)
|
||||
from .pokergo import (
|
||||
PokerGoCollectionIE,
|
||||
PokerGoIE,
|
||||
|
@ -1649,6 +1647,7 @@
|
|||
RadioKapitalIE,
|
||||
RadioKapitalShowIE,
|
||||
)
|
||||
from .radioradicale import RadioRadicaleIE
|
||||
from .radiozet import RadioZetPodcastIE
|
||||
from .radlive import (
|
||||
RadLiveChannelIE,
|
||||
|
@ -2251,6 +2250,10 @@
|
|||
)
|
||||
from .ukcolumn import UkColumnIE
|
||||
from .uktvplay import UKTVPlayIE
|
||||
from .uliza import (
|
||||
UlizaPlayerIE,
|
||||
UlizaPortalIE,
|
||||
)
|
||||
from .umg import UMGDeIE
|
||||
from .unistra import UnistraIE
|
||||
from .unity import UnityIE
|
||||
|
@ -2279,10 +2282,6 @@
|
|||
from .varzesh3 import Varzesh3IE
|
||||
from .vbox7 import Vbox7IE
|
||||
from .veo import VeoIE
|
||||
from .veoh import (
|
||||
VeohIE,
|
||||
VeohUserIE,
|
||||
)
|
||||
from .vesti import VestiIE
|
||||
from .vevo import (
|
||||
VevoIE,
|
||||
|
|
|
@ -6,7 +6,6 @@
|
|||
import io
|
||||
import json
|
||||
import re
|
||||
import struct
|
||||
import time
|
||||
import urllib.parse
|
||||
import uuid
|
||||
|
@ -18,10 +17,8 @@
|
|||
from ..utils import (
|
||||
ExtractorError,
|
||||
OnDemandPagedList,
|
||||
bytes_to_intlist,
|
||||
decode_base_n,
|
||||
int_or_none,
|
||||
intlist_to_bytes,
|
||||
time_seconds,
|
||||
traverse_obj,
|
||||
update_url_query,
|
||||
|
@ -72,15 +69,15 @@ def _get_videokey_from_ticket(self, ticket):
|
|||
})
|
||||
|
||||
res = decode_base_n(license_response['k'], table=self._STRTABLE)
|
||||
encvideokey = bytes_to_intlist(struct.pack('>QQ', res >> 64, res & 0xffffffffffffffff))
|
||||
encvideokey = list(res.to_bytes(16, 'big'))
|
||||
|
||||
h = hmac.new(
|
||||
binascii.unhexlify(self._HKEY),
|
||||
(license_response['cid'] + self.ie._DEVICE_ID).encode(),
|
||||
digestmod=hashlib.sha256)
|
||||
enckey = bytes_to_intlist(h.digest())
|
||||
enckey = list(h.digest())
|
||||
|
||||
return intlist_to_bytes(aes_ecb_decrypt(encvideokey, enckey))
|
||||
return bytes(aes_ecb_decrypt(encvideokey, enckey))
|
||||
|
||||
|
||||
class AbemaTVBaseIE(InfoExtractor):
|
||||
|
|
|
@ -11,11 +11,9 @@
|
|||
from ..utils import (
|
||||
ExtractorError,
|
||||
ass_subtitles_timecode,
|
||||
bytes_to_intlist,
|
||||
bytes_to_long,
|
||||
float_or_none,
|
||||
int_or_none,
|
||||
intlist_to_bytes,
|
||||
join_nonempty,
|
||||
long_to_bytes,
|
||||
parse_iso8601,
|
||||
|
@ -198,16 +196,16 @@ def _real_extract(self, url):
|
|||
|
||||
links_url = try_get(options, lambda x: x['video']['url']) or (video_base_url + 'link')
|
||||
self._K = ''.join(random.choices('0123456789abcdef', k=16))
|
||||
message = bytes_to_intlist(json.dumps({
|
||||
message = list(json.dumps({
|
||||
'k': self._K,
|
||||
't': token,
|
||||
}))
|
||||
}).encode())
|
||||
|
||||
# Sometimes authentication fails for no good reason, retry with
|
||||
# a different random padding
|
||||
links_data = None
|
||||
for _ in range(3):
|
||||
padded_message = intlist_to_bytes(pkcs1pad(message, 128))
|
||||
padded_message = bytes(pkcs1pad(message, 128))
|
||||
n, e = self._RSA_KEY
|
||||
encrypted_message = long_to_bytes(pow(bytes_to_long(padded_message), e, n))
|
||||
authorization = base64.b64encode(encrypted_message).decode()
|
||||
|
|
|
@ -66,6 +66,14 @@ def _call_api(self, endpoint, display_id, data=None, headers=None, query=None):
|
|||
extensions={'legacy_ssl': True}), display_id,
|
||||
'Downloading API JSON', 'Unable to download API JSON')
|
||||
|
||||
@staticmethod
|
||||
def _fixup_thumb(thumb_url):
|
||||
if not url_or_none(thumb_url):
|
||||
return None
|
||||
# Core would determine_ext as 'php' from the url, so we need to provide the real ext
|
||||
# See: https://github.com/yt-dlp/yt-dlp/issues/11537
|
||||
return [{'url': thumb_url, 'ext': 'jpg'}]
|
||||
|
||||
|
||||
class AfreecaTVIE(AfreecaTVBaseIE):
|
||||
IE_NAME = 'soop'
|
||||
|
@ -155,7 +163,7 @@ def _real_extract(self, url):
|
|||
'uploader': ('writer_nick', {str}),
|
||||
'uploader_id': ('bj_id', {str}),
|
||||
'duration': ('total_file_duration', {int_or_none(scale=1000)}),
|
||||
'thumbnail': ('thumb', {url_or_none}),
|
||||
'thumbnails': ('thumb', {self._fixup_thumb}),
|
||||
})
|
||||
|
||||
entries = []
|
||||
|
@ -226,8 +234,7 @@ def _real_extract(self, url):
|
|||
|
||||
return self.playlist_result(self._entries(data), video_id)
|
||||
|
||||
@staticmethod
|
||||
def _entries(data):
|
||||
def _entries(self, data):
|
||||
# 'files' is always a list with 1 element
|
||||
yield from traverse_obj(data, (
|
||||
'data', lambda _, v: v['story_type'] == 'catch',
|
||||
|
@ -238,7 +245,7 @@ def _entries(data):
|
|||
'title': ('title', {str}),
|
||||
'uploader': ('writer_nick', {str}),
|
||||
'uploader_id': ('writer_id', {str}),
|
||||
'thumbnail': ('thumb', {url_or_none}),
|
||||
'thumbnails': ('thumb', {self._fixup_thumb}),
|
||||
'timestamp': ('write_timestamp', {int_or_none}),
|
||||
}))
|
||||
|
||||
|
|
|
@ -8,10 +8,8 @@
|
|||
from .common import InfoExtractor
|
||||
from ..aes import aes_encrypt
|
||||
from ..utils import (
|
||||
bytes_to_intlist,
|
||||
determine_ext,
|
||||
int_or_none,
|
||||
intlist_to_bytes,
|
||||
join_nonempty,
|
||||
smuggle_url,
|
||||
strip_jsonp,
|
||||
|
@ -234,8 +232,8 @@ def _get_video_json(self, access_key, video_id, extracted_token):
|
|||
server_time = self._server_time(access_key, video_id)
|
||||
input_data = f'{server_time}~{md5_text(video_data_url)}~{md5_text(server_time)}'
|
||||
|
||||
auth_secret = intlist_to_bytes(aes_encrypt(
|
||||
bytes_to_intlist(input_data[:64]), bytes_to_intlist(self._AUTH_KEY)))
|
||||
auth_secret = bytes(aes_encrypt(
|
||||
list(input_data[:64].encode()), list(self._AUTH_KEY)))
|
||||
query = {
|
||||
'X-Anvato-Adst-Auth': base64.b64encode(auth_secret).decode('ascii'),
|
||||
'rtyp': 'fp',
|
||||
|
|
437
yt_dlp/extractor/bandlab.py
Normal file
437
yt_dlp/extractor/bandlab.py
Normal file
|
@ -0,0 +1,437 @@
|
|||
from .common import InfoExtractor
|
||||
from ..utils import (
|
||||
ExtractorError,
|
||||
float_or_none,
|
||||
format_field,
|
||||
int_or_none,
|
||||
parse_iso8601,
|
||||
parse_qs,
|
||||
truncate_string,
|
||||
url_or_none,
|
||||
)
|
||||
from ..utils.traversal import traverse_obj, value
|
||||
|
||||
|
||||
class BandlabBaseIE(InfoExtractor):
|
||||
def _call_api(self, endpoint, asset_id, **kwargs):
|
||||
headers = kwargs.pop('headers', None) or {}
|
||||
return self._download_json(
|
||||
f'https://www.bandlab.com/api/v1.3/{endpoint}/{asset_id}',
|
||||
asset_id, headers={
|
||||
'accept': 'application/json',
|
||||
'referer': 'https://www.bandlab.com/',
|
||||
'x-client-id': 'BandLab-Web',
|
||||
'x-client-version': '10.1.124',
|
||||
**headers,
|
||||
}, **kwargs)
|
||||
|
||||
def _parse_revision(self, revision_data, url=None):
|
||||
return {
|
||||
'vcodec': 'none',
|
||||
'media_type': 'revision',
|
||||
'extractor_key': BandlabIE.ie_key(),
|
||||
'extractor': BandlabIE.IE_NAME,
|
||||
**traverse_obj(revision_data, {
|
||||
'webpage_url': (
|
||||
'id', ({value(url)}, {format_field(template='https://www.bandlab.com/revision/%s')}), filter, any),
|
||||
'id': (('revisionId', 'id'), {str}, any),
|
||||
'title': ('song', 'name', {str}),
|
||||
'track': ('song', 'name', {str}),
|
||||
'url': ('mixdown', 'file', {url_or_none}),
|
||||
'thumbnail': ('song', 'picture', 'url', {url_or_none}),
|
||||
'description': ('description', {str}),
|
||||
'uploader': ('creator', 'name', {str}),
|
||||
'uploader_id': ('creator', 'username', {str}),
|
||||
'timestamp': ('createdOn', {parse_iso8601}),
|
||||
'duration': ('mixdown', 'duration', {float_or_none}),
|
||||
'view_count': ('counters', 'plays', {int_or_none}),
|
||||
'like_count': ('counters', 'likes', {int_or_none}),
|
||||
'comment_count': ('counters', 'comments', {int_or_none}),
|
||||
'genres': ('genres', ..., 'name', {str}),
|
||||
}),
|
||||
}
|
||||
|
||||
def _parse_track(self, track_data, url=None):
|
||||
return {
|
||||
'vcodec': 'none',
|
||||
'media_type': 'track',
|
||||
'extractor_key': BandlabIE.ie_key(),
|
||||
'extractor': BandlabIE.IE_NAME,
|
||||
**traverse_obj(track_data, {
|
||||
'webpage_url': (
|
||||
'id', ({value(url)}, {format_field(template='https://www.bandlab.com/post/%s')}), filter, any),
|
||||
'id': (('revisionId', 'id'), {str}, any),
|
||||
'url': ('track', 'sample', 'audioUrl', {url_or_none}),
|
||||
'title': ('track', 'name', {str}),
|
||||
'track': ('track', 'name', {str}),
|
||||
'description': ('caption', {str}),
|
||||
'thumbnail': ('track', 'picture', ('original', 'url'), {url_or_none}, any),
|
||||
'view_count': ('counters', 'plays', {int_or_none}),
|
||||
'like_count': ('counters', 'likes', {int_or_none}),
|
||||
'comment_count': ('counters', 'comments', {int_or_none}),
|
||||
'duration': ('track', 'sample', 'duration', {float_or_none}),
|
||||
'uploader': ('creator', 'name', {str}),
|
||||
'uploader_id': ('creator', 'username', {str}),
|
||||
'timestamp': ('createdOn', {parse_iso8601}),
|
||||
}),
|
||||
}
|
||||
|
||||
def _parse_video(self, video_data, url=None):
|
||||
return {
|
||||
'media_type': 'video',
|
||||
'extractor_key': BandlabIE.ie_key(),
|
||||
'extractor': BandlabIE.IE_NAME,
|
||||
**traverse_obj(video_data, {
|
||||
'id': ('id', {str}),
|
||||
'webpage_url': (
|
||||
'id', ({value(url)}, {format_field(template='https://www.bandlab.com/post/%s')}), filter, any),
|
||||
'url': ('video', 'url', {url_or_none}),
|
||||
'title': ('caption', {lambda x: x.replace('\n', ' ')}, {truncate_string(left=50)}),
|
||||
'description': ('caption', {str}),
|
||||
'thumbnail': ('video', 'picture', 'url', {url_or_none}),
|
||||
'view_count': ('video', 'counters', 'plays', {int_or_none}),
|
||||
'like_count': ('video', 'counters', 'likes', {int_or_none}),
|
||||
'comment_count': ('counters', 'comments', {int_or_none}),
|
||||
'duration': ('video', 'duration', {float_or_none}),
|
||||
'uploader': ('creator', 'name', {str}),
|
||||
'uploader_id': ('creator', 'username', {str}),
|
||||
}),
|
||||
}
|
||||
|
||||
|
||||
class BandlabIE(BandlabBaseIE):
|
||||
_VALID_URL = [
|
||||
r'https?://(?:www\.)?bandlab.com/(?P<url_type>track|post|revision)/(?P<id>[\da-f_-]+)',
|
||||
r'https?://(?:www\.)?bandlab.com/(?P<url_type>embed)/\?(?:[^#]*&)?id=(?P<id>[\da-f-]+)',
|
||||
]
|
||||
_EMBED_REGEX = [rf'<iframe[^>]+src=[\'"](?P<url>{_VALID_URL[1]})[\'"]']
|
||||
_TESTS = [{
|
||||
'url': 'https://www.bandlab.com/track/04b37e88dba24967b9dac8eb8567ff39_07d7f906fc96ee11b75e000d3a428fff',
|
||||
'md5': '46f7b43367dd268bbcf0bbe466753b2c',
|
||||
'info_dict': {
|
||||
'id': '02d7f906-fc96-ee11-b75e-000d3a428fff',
|
||||
'ext': 'm4a',
|
||||
'uploader_id': 'ender_milze',
|
||||
'track': 'sweet black',
|
||||
'description': 'composed by juanjn3737',
|
||||
'timestamp': 1702171963,
|
||||
'view_count': int,
|
||||
'like_count': int,
|
||||
'duration': 54.629999999999995,
|
||||
'title': 'sweet black',
|
||||
'upload_date': '20231210',
|
||||
'thumbnail': 'https://bandlabimages.azureedge.net/v1.0/songs/fa082beb-b856-4730-9170-a57e4e32cc2c/',
|
||||
'genres': ['Lofi'],
|
||||
'uploader': 'ender milze',
|
||||
'comment_count': int,
|
||||
'media_type': 'revision',
|
||||
},
|
||||
}, {
|
||||
# Same track as above but post URL
|
||||
'url': 'https://www.bandlab.com/post/07d7f906-fc96-ee11-b75e-000d3a428fff',
|
||||
'md5': '46f7b43367dd268bbcf0bbe466753b2c',
|
||||
'info_dict': {
|
||||
'id': '02d7f906-fc96-ee11-b75e-000d3a428fff',
|
||||
'ext': 'm4a',
|
||||
'uploader_id': 'ender_milze',
|
||||
'track': 'sweet black',
|
||||
'description': 'composed by juanjn3737',
|
||||
'timestamp': 1702171973,
|
||||
'view_count': int,
|
||||
'like_count': int,
|
||||
'duration': 54.629999999999995,
|
||||
'title': 'sweet black',
|
||||
'upload_date': '20231210',
|
||||
'thumbnail': 'https://bandlabimages.azureedge.net/v1.0/songs/fa082beb-b856-4730-9170-a57e4e32cc2c/',
|
||||
'genres': ['Lofi'],
|
||||
'uploader': 'ender milze',
|
||||
'comment_count': int,
|
||||
'media_type': 'revision',
|
||||
},
|
||||
}, {
|
||||
# SharedKey Example
|
||||
'url': 'https://www.bandlab.com/track/048916c2-c6da-ee11-85f9-6045bd2e11f9?sharedKey=0NNWX8qYAEmI38lWAzCNDA',
|
||||
'md5': '15174b57c44440e2a2008be9cae00250',
|
||||
'info_dict': {
|
||||
'id': '038916c2-c6da-ee11-85f9-6045bd2e11f9',
|
||||
'ext': 'm4a',
|
||||
'comment_count': int,
|
||||
'genres': ['Other'],
|
||||
'uploader_id': 'user8353034818103753',
|
||||
'thumbnail': 'https://bandlabimages.azureedge.net/v1.0/songs/51b18363-da23-4b9b-a29c-2933a3e561ca/',
|
||||
'timestamp': 1709625771,
|
||||
'track': 'PodcastMaerchen4b',
|
||||
'duration': 468.14,
|
||||
'view_count': int,
|
||||
'description': 'Podcast: Neues aus der Märchenwelt',
|
||||
'like_count': int,
|
||||
'upload_date': '20240305',
|
||||
'uploader': 'Erna Wageneder',
|
||||
'title': 'PodcastMaerchen4b',
|
||||
'media_type': 'revision',
|
||||
},
|
||||
}, {
|
||||
# Different Revision selected
|
||||
'url': 'https://www.bandlab.com/track/130343fc-148b-ea11-96d2-0003ffd1fc09?revId=110343fc-148b-ea11-96d2-0003ffd1fc09',
|
||||
'md5': '74e055ef9325d63f37088772fbfe4454',
|
||||
'info_dict': {
|
||||
'id': '110343fc-148b-ea11-96d2-0003ffd1fc09',
|
||||
'ext': 'm4a',
|
||||
'timestamp': 1588273294,
|
||||
'thumbnail': 'https://bandlabimages.azureedge.net/v1.0/users/b612e533-e4f7-4542-9f50-3fcfd8dd822c/',
|
||||
'description': 'Final Revision.',
|
||||
'title': 'Replay ( Instrumental)',
|
||||
'uploader': 'David R Sparks',
|
||||
'uploader_id': 'davesnothome69',
|
||||
'view_count': int,
|
||||
'comment_count': int,
|
||||
'track': 'Replay ( Instrumental)',
|
||||
'genres': ['Rock'],
|
||||
'upload_date': '20200430',
|
||||
'like_count': int,
|
||||
'duration': 279.43,
|
||||
'media_type': 'revision',
|
||||
},
|
||||
}, {
|
||||
# Video
|
||||
'url': 'https://www.bandlab.com/post/5cdf9036-3857-ef11-991a-6045bd36e0d9',
|
||||
'md5': '8caa2ef28e86c1dacf167293cfdbeba9',
|
||||
'info_dict': {
|
||||
'id': '5cdf9036-3857-ef11-991a-6045bd36e0d9',
|
||||
'ext': 'mp4',
|
||||
'duration': 44.705,
|
||||
'thumbnail': 'https://bandlabimages.azureedge.net/v1.0/videos/67c6cef1-cef6-40d3-831e-a55bc1dcb972/',
|
||||
'comment_count': int,
|
||||
'title': 'backing vocals',
|
||||
'uploader_id': 'marliashya',
|
||||
'uploader': 'auraa',
|
||||
'like_count': int,
|
||||
'description': 'backing vocals',
|
||||
'media_type': 'video',
|
||||
},
|
||||
}, {
|
||||
# Embed Example
|
||||
'url': 'https://www.bandlab.com/embed/?blur=false&id=014de0a4-7d82-ea11-a94c-0003ffd19c0f',
|
||||
'md5': 'a4ad05cb68c54faaed9b0a8453a8cf4a',
|
||||
'info_dict': {
|
||||
'id': '014de0a4-7d82-ea11-a94c-0003ffd19c0f',
|
||||
'ext': 'm4a',
|
||||
'comment_count': int,
|
||||
'genres': ['Electronic'],
|
||||
'uploader': 'Charlie Henson',
|
||||
'timestamp': 1587328674,
|
||||
'upload_date': '20200419',
|
||||
'view_count': int,
|
||||
'track': 'Positronic Meltdown',
|
||||
'duration': 318.55,
|
||||
'thumbnail': 'https://bandlabimages.azureedge.net/v1.0/songs/87165bc3-5439-496e-b1f7-a9f13b541ff2/',
|
||||
'description': 'Checkout my tracks at AOMX http://aomxsounds.com/',
|
||||
'uploader_id': 'microfreaks',
|
||||
'title': 'Positronic Meltdown',
|
||||
'like_count': int,
|
||||
'media_type': 'revision',
|
||||
},
|
||||
}, {
|
||||
# Track without revisions available
|
||||
'url': 'https://www.bandlab.com/track/55767ac51789ea11a94c0003ffd1fc09_2f007b0a37b94ec7a69bc25ae15108a5',
|
||||
'md5': 'f05d68a3769952c2d9257c473e14c15f',
|
||||
'info_dict': {
|
||||
'id': '55767ac51789ea11a94c0003ffd1fc09_2f007b0a37b94ec7a69bc25ae15108a5',
|
||||
'ext': 'm4a',
|
||||
'track': 'insame',
|
||||
'like_count': int,
|
||||
'duration': 84.03,
|
||||
'title': 'insame',
|
||||
'view_count': int,
|
||||
'comment_count': int,
|
||||
'uploader': 'Sorakime',
|
||||
'uploader_id': 'sorakime',
|
||||
'thumbnail': 'https://bandlabimages.azureedge.net/v1.0/users/572a351a-0f3a-4c6a-ac39-1a5defdeeb1c/',
|
||||
'timestamp': 1691162128,
|
||||
'upload_date': '20230804',
|
||||
'media_type': 'track',
|
||||
},
|
||||
}, {
|
||||
'url': 'https://www.bandlab.com/revision/014de0a4-7d82-ea11-a94c-0003ffd19c0f',
|
||||
'only_matching': True,
|
||||
}]
|
||||
_WEBPAGE_TESTS = [{
|
||||
'url': 'https://phantomluigi.github.io/',
|
||||
'info_dict': {
|
||||
'id': 'e14223c3-7871-ef11-bdfd-000d3a980db3',
|
||||
'ext': 'm4a',
|
||||
'view_count': int,
|
||||
'upload_date': '20240913',
|
||||
'uploader_id': 'phantommusicofficial',
|
||||
'timestamp': 1726194897,
|
||||
'uploader': 'Phantom',
|
||||
'comment_count': int,
|
||||
'genres': ['Progresive Rock'],
|
||||
'description': 'md5:a38cd668f7a2843295ef284114f18429',
|
||||
'duration': 225.23,
|
||||
'like_count': int,
|
||||
'title': 'Vermilion Pt. 2 (Cover)',
|
||||
'track': 'Vermilion Pt. 2 (Cover)',
|
||||
'thumbnail': 'https://bandlabimages.azureedge.net/v1.0/songs/62b10750-7aef-4f42-ad08-1af52f577e97/',
|
||||
'media_type': 'revision',
|
||||
},
|
||||
}]
|
||||
|
||||
def _real_extract(self, url):
|
||||
display_id, url_type = self._match_valid_url(url).group('id', 'url_type')
|
||||
|
||||
qs = parse_qs(url)
|
||||
revision_id = traverse_obj(qs, (('revId', 'id'), 0, any))
|
||||
if url_type == 'revision':
|
||||
revision_id = display_id
|
||||
|
||||
revision_data = None
|
||||
if not revision_id:
|
||||
post_data = self._call_api(
|
||||
'posts', display_id, note='Downloading post data',
|
||||
query=traverse_obj(qs, {'sharedKey': ('sharedKey', 0)}))
|
||||
|
||||
revision_id = traverse_obj(post_data, (('revisionId', ('revision', 'id')), {str}, any))
|
||||
revision_data = traverse_obj(post_data, ('revision', {dict}))
|
||||
|
||||
if not revision_data and not revision_id:
|
||||
post_type = post_data.get('type')
|
||||
if post_type == 'Video':
|
||||
return self._parse_video(post_data, url=url)
|
||||
if post_type == 'Track':
|
||||
return self._parse_track(post_data, url=url)
|
||||
raise ExtractorError(f'Could not extract data for post type {post_type!r}')
|
||||
|
||||
if not revision_data:
|
||||
revision_data = self._call_api(
|
||||
'revisions', revision_id, note='Downloading revision data', query={'edit': 'false'})
|
||||
|
||||
return self._parse_revision(revision_data, url=url)
|
||||
|
||||
|
||||
class BandlabPlaylistIE(BandlabBaseIE):
|
||||
_VALID_URL = [
|
||||
r'https?://(?:www\.)?bandlab.com/(?:[\w]+/)?(?P<type>albums|collections)/(?P<id>[\da-f-]+)',
|
||||
r'https?://(?:www\.)?bandlab.com/(?P<type>embed)/collection/\?(?:[^#]*&)?id=(?P<id>[\da-f-]+)',
|
||||
]
|
||||
_EMBED_REGEX = [rf'<iframe[^>]+src=[\'"](?P<url>{_VALID_URL[1]})[\'"]']
|
||||
_TESTS = [{
|
||||
'url': 'https://www.bandlab.com/davesnothome69/albums/89b79ea6-de42-ed11-b495-00224845aac7',
|
||||
'info_dict': {
|
||||
'thumbnail': 'https://bl-prod-images.azureedge.net/v1.3/albums/69507ff3-579a-45be-afca-9e87eddec944/',
|
||||
'release_date': '20221003',
|
||||
'title': 'Remnants',
|
||||
'album': 'Remnants',
|
||||
'like_count': int,
|
||||
'album_type': 'LP',
|
||||
'description': 'A collection of some feel good, rock hits.',
|
||||
'comment_count': int,
|
||||
'view_count': int,
|
||||
'id': '89b79ea6-de42-ed11-b495-00224845aac7',
|
||||
'uploader': 'David R Sparks',
|
||||
'uploader_id': 'davesnothome69',
|
||||
},
|
||||
'playlist_count': 10,
|
||||
}, {
|
||||
'url': 'https://www.bandlab.com/slytheband/collections/955102d4-1040-ef11-86c3-000d3a42581b',
|
||||
'info_dict': {
|
||||
'id': '955102d4-1040-ef11-86c3-000d3a42581b',
|
||||
'timestamp': 1720762659,
|
||||
'view_count': int,
|
||||
'title': 'My Shit 🖤',
|
||||
'uploader_id': 'slytheband',
|
||||
'uploader': '𝓢𝓛𝓨',
|
||||
'upload_date': '20240712',
|
||||
'like_count': int,
|
||||
'thumbnail': 'https://bandlabimages.azureedge.net/v1.0/collections/2c64ca12-b180-4b76-8587-7a8da76bddc8/',
|
||||
},
|
||||
'playlist_count': 15,
|
||||
}, {
|
||||
# Embeds can contain both albums and collections with the same URL pattern. This is an album
|
||||
'url': 'https://www.bandlab.com/embed/collection/?id=12cc6f7f-951b-ee11-907c-00224844f303',
|
||||
'info_dict': {
|
||||
'id': '12cc6f7f-951b-ee11-907c-00224844f303',
|
||||
'release_date': '20230706',
|
||||
'description': 'This is a collection of songs I created when I had an Amiga computer.',
|
||||
'view_count': int,
|
||||
'title': 'Mark Salud The Amiga Collection',
|
||||
'uploader_id': 'mssirmooth1962',
|
||||
'comment_count': int,
|
||||
'thumbnail': 'https://bl-prod-images.azureedge.net/v1.3/albums/d618bd7b-0537-40d5-bdd8-61b066e77d59/',
|
||||
'like_count': int,
|
||||
'uploader': 'Mark Salud',
|
||||
'album': 'Mark Salud The Amiga Collection',
|
||||
'album_type': 'LP',
|
||||
},
|
||||
'playlist_count': 24,
|
||||
}, {
|
||||
# Tracks without revision id
|
||||
'url': 'https://www.bandlab.com/embed/collection/?id=e98aafb5-d932-ee11-b8f0-00224844c719',
|
||||
'info_dict': {
|
||||
'like_count': int,
|
||||
'uploader_id': 'sorakime',
|
||||
'comment_count': int,
|
||||
'uploader': 'Sorakime',
|
||||
'view_count': int,
|
||||
'description': 'md5:4ec31c568a5f5a5a2b17572ea64c3825',
|
||||
'release_date': '20230812',
|
||||
'title': 'Art',
|
||||
'album': 'Art',
|
||||
'album_type': 'Album',
|
||||
'id': 'e98aafb5-d932-ee11-b8f0-00224844c719',
|
||||
'thumbnail': 'https://bl-prod-images.azureedge.net/v1.3/albums/20c890de-e94a-4422-828a-2da6377a13c8/',
|
||||
},
|
||||
'playlist_count': 13,
|
||||
}, {
|
||||
'url': 'https://www.bandlab.com/albums/89b79ea6-de42-ed11-b495-00224845aac7',
|
||||
'only_matching': True,
|
||||
}]
|
||||
|
||||
def _entries(self, album_data):
|
||||
for post in traverse_obj(album_data, ('posts', lambda _, v: v['type'])):
|
||||
post_type = post['type']
|
||||
if post_type == 'Revision':
|
||||
yield self._parse_revision(post.get('revision'))
|
||||
elif post_type == 'Track':
|
||||
yield self._parse_track(post)
|
||||
elif post_type == 'Video':
|
||||
yield self._parse_video(post)
|
||||
else:
|
||||
self.report_warning(f'Skipping unknown post type: "{post_type}"')
|
||||
|
||||
def _real_extract(self, url):
|
||||
playlist_id, playlist_type = self._match_valid_url(url).group('id', 'type')
|
||||
|
||||
endpoints = {
|
||||
'albums': ['albums'],
|
||||
'collections': ['collections'],
|
||||
'embed': ['collections', 'albums'],
|
||||
}.get(playlist_type)
|
||||
for endpoint in endpoints:
|
||||
playlist_data = self._call_api(
|
||||
endpoint, playlist_id, note=f'Downloading {endpoint[:-1]} data',
|
||||
fatal=False, expected_status=404)
|
||||
if not playlist_data.get('errorCode'):
|
||||
playlist_type = endpoint
|
||||
break
|
||||
if error_code := playlist_data.get('errorCode'):
|
||||
raise ExtractorError(f'Could not find playlist data. Error code: "{error_code}"')
|
||||
|
||||
return self.playlist_result(
|
||||
self._entries(playlist_data), playlist_id,
|
||||
**traverse_obj(playlist_data, {
|
||||
'title': ('name', {str}),
|
||||
'description': ('description', {str}),
|
||||
'uploader': ('creator', 'name', {str}),
|
||||
'uploader_id': ('creator', 'username', {str}),
|
||||
'timestamp': ('createdOn', {parse_iso8601}),
|
||||
'release_date': ('releaseDate', {lambda x: x.replace('-', '')}, filter),
|
||||
'thumbnail': ('picture', ('original', 'url'), {url_or_none}, any),
|
||||
'like_count': ('counters', 'likes', {int_or_none}),
|
||||
'comment_count': ('counters', 'comments', {int_or_none}),
|
||||
'view_count': ('counters', 'plays', {int_or_none}),
|
||||
}),
|
||||
**(traverse_obj(playlist_data, {
|
||||
'album': ('name', {str}),
|
||||
'album_type': ('type', {str}),
|
||||
}) if playlist_type == 'albums' else {}))
|
|
@ -5,6 +5,7 @@
|
|||
ExtractorError,
|
||||
lowercase_escape,
|
||||
url_or_none,
|
||||
urlencode_postdata,
|
||||
)
|
||||
|
||||
|
||||
|
@ -40,14 +41,48 @@ class ChaturbateIE(InfoExtractor):
|
|||
'only_matching': True,
|
||||
}]
|
||||
|
||||
_ROOM_OFFLINE = 'Room is currently offline'
|
||||
_ERROR_MAP = {
|
||||
'offline': 'Room is currently offline',
|
||||
'private': 'Room is currently in a private show',
|
||||
'away': 'Performer is currently away',
|
||||
'password protected': 'Room is password protected',
|
||||
'hidden': 'Hidden session in progress',
|
||||
}
|
||||
|
||||
def _real_extract(self, url):
|
||||
video_id, tld = self._match_valid_url(url).group('id', 'tld')
|
||||
def _extract_from_api(self, video_id, tld):
|
||||
response = self._download_json(
|
||||
f'https://chaturbate.{tld}/get_edge_hls_url_ajax/', video_id,
|
||||
data=urlencode_postdata({'room_slug': video_id}),
|
||||
headers={
|
||||
**self.geo_verification_headers(),
|
||||
'X-Requested-With': 'XMLHttpRequest',
|
||||
'Accept': 'application/json',
|
||||
}, fatal=False, impersonate=True) or {}
|
||||
|
||||
status = response.get('room_status')
|
||||
if status != 'public':
|
||||
if error := self._ERROR_MAP.get(status):
|
||||
raise ExtractorError(error, expected=True)
|
||||
self.report_warning('Falling back to webpage extraction')
|
||||
return None
|
||||
|
||||
m3u8_url = response.get('url')
|
||||
if not m3u8_url:
|
||||
self.raise_geo_restricted()
|
||||
|
||||
return {
|
||||
'id': video_id,
|
||||
'title': video_id,
|
||||
'thumbnail': f'https://roomimg.stream.highwebmedia.com/ri/{video_id}.jpg',
|
||||
'is_live': True,
|
||||
'age_limit': 18,
|
||||
'formats': self._extract_m3u8_formats(m3u8_url, video_id, ext='mp4', live=True),
|
||||
}
|
||||
|
||||
def _extract_from_html(self, video_id, tld):
|
||||
webpage = self._download_webpage(
|
||||
f'https://chaturbate.{tld}/{video_id}/', video_id,
|
||||
headers=self.geo_verification_headers())
|
||||
headers=self.geo_verification_headers(), impersonate=True)
|
||||
|
||||
found_m3u8_urls = []
|
||||
|
||||
|
@ -85,8 +120,8 @@ def _real_extract(self, url):
|
|||
webpage, 'error', group='error', default=None)
|
||||
if not error:
|
||||
if any(p in webpage for p in (
|
||||
self._ROOM_OFFLINE, 'offline_tipping', 'tip_offline')):
|
||||
error = self._ROOM_OFFLINE
|
||||
self._ERROR_MAP['offline'], 'offline_tipping', 'tip_offline')):
|
||||
error = self._ERROR_MAP['offline']
|
||||
if error:
|
||||
raise ExtractorError(error, expected=True)
|
||||
raise ExtractorError('Unable to find stream URL')
|
||||
|
@ -113,3 +148,7 @@ def _real_extract(self, url):
|
|||
'is_live': True,
|
||||
'formats': formats,
|
||||
}
|
||||
|
||||
def _real_extract(self, url):
|
||||
video_id, tld = self._match_valid_url(url).group('id', 'tld')
|
||||
return self._extract_from_api(video_id, tld) or self._extract_from_html(video_id, tld)
|
||||
|
|
|
@ -25,7 +25,6 @@
|
|||
from ..compat import (
|
||||
compat_etree_fromstring,
|
||||
compat_expanduser,
|
||||
compat_os_name,
|
||||
urllib_req_to_req,
|
||||
)
|
||||
from ..cookies import LenientSimpleCookie
|
||||
|
@ -279,6 +278,7 @@ class InfoExtractor:
|
|||
thumbnails: A list of dictionaries, with the following entries:
|
||||
* "id" (optional, string) - Thumbnail format ID
|
||||
* "url"
|
||||
* "ext" (optional, string) - actual image extension if not given in URL
|
||||
* "preference" (optional, int) - quality of the image
|
||||
* "width" (optional, int)
|
||||
* "height" (optional, int)
|
||||
|
@ -1028,7 +1028,7 @@ def _request_dump_filename(self, url, video_id, data=None):
|
|||
filename = sanitize_filename(f'{basen}.dump', restricted=True)
|
||||
# Working around MAX_PATH limitation on Windows (see
|
||||
# http://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx)
|
||||
if compat_os_name == 'nt':
|
||||
if os.name == 'nt':
|
||||
absfilepath = os.path.abspath(filename)
|
||||
if len(absfilepath) > 259:
|
||||
filename = fR'\\?\{absfilepath}'
|
||||
|
@ -3767,7 +3767,7 @@ def _merge_subtitles(cls, *dicts, target=None):
|
|||
""" Merge subtitle dictionaries, language by language. """
|
||||
if target is None:
|
||||
target = {}
|
||||
for d in dicts:
|
||||
for d in filter(None, dicts):
|
||||
for lang, subs in d.items():
|
||||
target[lang] = cls._merge_subtitle_items(target.get(lang, []), subs)
|
||||
return target
|
||||
|
|
|
@ -1,14 +1,27 @@
|
|||
import json
|
||||
import re
|
||||
import urllib.parse
|
||||
|
||||
from .common import InfoExtractor
|
||||
from ..utils import orderedSet
|
||||
from .ninecninemedia import NineCNineMediaIE
|
||||
from ..utils import extract_attributes, orderedSet
|
||||
from ..utils.traversal import find_element, traverse_obj
|
||||
|
||||
|
||||
class CTVNewsIE(InfoExtractor):
|
||||
_VALID_URL = r'https?://(?:.+?\.)?ctvnews\.ca/(?:video\?(?:clip|playlist|bin)Id=|.*?)(?P<id>[0-9.]+)'
|
||||
_BASE_REGEX = r'https?://(?:[^.]+\.)?ctvnews\.ca/'
|
||||
_VIDEO_ID_RE = r'(?P<id>\d{5,})'
|
||||
_PLAYLIST_ID_RE = r'(?P<id>\d\.\d{5,})'
|
||||
_VALID_URL = [
|
||||
rf'{_BASE_REGEX}video/c{_VIDEO_ID_RE}',
|
||||
rf'{_BASE_REGEX}video(?:-gallery)?/?\?clipId={_VIDEO_ID_RE}',
|
||||
rf'{_BASE_REGEX}video/?\?(?:playlist|bin)Id={_PLAYLIST_ID_RE}',
|
||||
rf'{_BASE_REGEX}(?!video/)[^?#]*?{_PLAYLIST_ID_RE}/?(?:$|[?#])',
|
||||
rf'{_BASE_REGEX}(?!video/)[^?#]+\?binId={_PLAYLIST_ID_RE}',
|
||||
]
|
||||
_TESTS = [{
|
||||
'url': 'http://www.ctvnews.ca/video?clipId=901995',
|
||||
'md5': '9b8624ba66351a23e0b6e1391971f9af',
|
||||
'md5': 'b608f466c7fa24b9666c6439d766ab7e',
|
||||
'info_dict': {
|
||||
'id': '901995',
|
||||
'ext': 'flv',
|
||||
|
@ -16,6 +29,33 @@ class CTVNewsIE(InfoExtractor):
|
|||
'description': 'md5:958dd3b4f5bbbf0ed4d045c790d89285',
|
||||
'timestamp': 1467286284,
|
||||
'upload_date': '20160630',
|
||||
'categories': [],
|
||||
'season_number': 0,
|
||||
'season': 'Season 0',
|
||||
'tags': [],
|
||||
'series': 'CTV News National | Archive | Stories 2',
|
||||
'season_id': '57981',
|
||||
'thumbnail': r're:https?://.*\.jpg$',
|
||||
'duration': 764.631,
|
||||
},
|
||||
}, {
|
||||
'url': 'https://barrie.ctvnews.ca/video/c3030933-here_s-what_s-making-news-for-nov--15?binId=1272429',
|
||||
'md5': '8b8c2b33c5c1803e3c26bc74ff8694d5',
|
||||
'info_dict': {
|
||||
'id': '3030933',
|
||||
'ext': 'flv',
|
||||
'title': 'Here’s what’s making news for Nov. 15',
|
||||
'description': 'Here are the top stories we’re working on for CTV News at 11 for Nov. 15',
|
||||
'thumbnail': 'http://images2.9c9media.com/image_asset/2021_2_22_a602e68e-1514-410e-a67a-e1f7cccbacab_png_2000x1125.jpg',
|
||||
'season_id': '58104',
|
||||
'season_number': 0,
|
||||
'tags': [],
|
||||
'season': 'Season 0',
|
||||
'categories': [],
|
||||
'series': 'CTV News Barrie',
|
||||
'upload_date': '20241116',
|
||||
'duration': 42.943,
|
||||
'timestamp': 1731722452,
|
||||
},
|
||||
}, {
|
||||
'url': 'http://www.ctvnews.ca/video?playlistId=1.2966224',
|
||||
|
@ -31,6 +71,72 @@ class CTVNewsIE(InfoExtractor):
|
|||
'id': '1.2876780',
|
||||
},
|
||||
'playlist_mincount': 100,
|
||||
}, {
|
||||
'url': 'https://www.ctvnews.ca/it-s-been-23-years-since-toronto-called-in-the-army-after-a-major-snowstorm-1.5736957',
|
||||
'info_dict':
|
||||
{
|
||||
'id': '1.5736957',
|
||||
},
|
||||
'playlist_mincount': 6,
|
||||
}, {
|
||||
'url': 'https://www.ctvnews.ca/business/respondents-to-bank-of-canada-questionnaire-largely-oppose-creating-a-digital-loonie-1.6665797',
|
||||
'md5': '24bc4b88cdc17d8c3fc01dfc228ab72c',
|
||||
'info_dict': {
|
||||
'id': '2695026',
|
||||
'ext': 'flv',
|
||||
'season_id': '89852',
|
||||
'series': 'From CTV News Channel',
|
||||
'description': 'md5:796a985a23cacc7e1e2fafefd94afd0a',
|
||||
'season': '2023',
|
||||
'title': 'Bank of Canada asks public about digital currency',
|
||||
'categories': [],
|
||||
'tags': [],
|
||||
'upload_date': '20230526',
|
||||
'season_number': 2023,
|
||||
'thumbnail': 'http://images2.9c9media.com/image_asset/2019_3_28_35f5afc3-10f6-4d92-b194-8b9a86f55c6a_png_1920x1080.jpg',
|
||||
'timestamp': 1685105157,
|
||||
'duration': 253.553,
|
||||
},
|
||||
}, {
|
||||
'url': 'https://stox.ctvnews.ca/video-gallery?clipId=582589',
|
||||
'md5': '135cc592df607d29dddc931f1b756ae2',
|
||||
'info_dict': {
|
||||
'id': '582589',
|
||||
'ext': 'flv',
|
||||
'categories': [],
|
||||
'timestamp': 1427906183,
|
||||
'season_number': 0,
|
||||
'duration': 125.559,
|
||||
'thumbnail': 'http://images2.9c9media.com/image_asset/2019_3_28_35f5afc3-10f6-4d92-b194-8b9a86f55c6a_png_1920x1080.jpg',
|
||||
'series': 'CTV News Stox',
|
||||
'description': 'CTV original footage of the rise and fall of the Berlin Wall.',
|
||||
'title': 'Berlin Wall',
|
||||
'season_id': '63817',
|
||||
'season': 'Season 0',
|
||||
'tags': [],
|
||||
'upload_date': '20150401',
|
||||
},
|
||||
}, {
|
||||
'url': 'https://ottawa.ctvnews.ca/features/regional-contact/regional-contact-archive?binId=1.1164587#3023759',
|
||||
'md5': 'a14c0603557decc6531260791c23cc5e',
|
||||
'info_dict': {
|
||||
'id': '3023759',
|
||||
'ext': 'flv',
|
||||
'season_number': 2024,
|
||||
'timestamp': 1731798000,
|
||||
'season': '2024',
|
||||
'episode': 'Episode 125',
|
||||
'description': 'CTV News Ottawa at Six',
|
||||
'duration': 2712.076,
|
||||
'episode_number': 125,
|
||||
'upload_date': '20241116',
|
||||
'title': 'CTV News Ottawa at Six for Saturday, November 16, 2024',
|
||||
'thumbnail': 'http://images2.9c9media.com/image_asset/2019_3_28_35f5afc3-10f6-4d92-b194-8b9a86f55c6a_png_1920x1080.jpg',
|
||||
'categories': [],
|
||||
'tags': [],
|
||||
'series': 'CTV News Ottawa at Six',
|
||||
'season_id': '92667',
|
||||
},
|
||||
}, {
|
||||
'url': 'http://www.ctvnews.ca/1.810401',
|
||||
'only_matching': True,
|
||||
|
@ -42,29 +148,35 @@ class CTVNewsIE(InfoExtractor):
|
|||
'only_matching': True,
|
||||
}]
|
||||
|
||||
def _ninecninemedia_url_result(self, clip_id):
|
||||
return self.url_result(f'9c9media:ctvnews_web:{clip_id}', NineCNineMediaIE, clip_id)
|
||||
|
||||
def _real_extract(self, url):
|
||||
page_id = self._match_id(url)
|
||||
|
||||
def ninecninemedia_url_result(clip_id):
|
||||
return {
|
||||
'_type': 'url_transparent',
|
||||
'id': clip_id,
|
||||
'url': f'9c9media:ctvnews_web:{clip_id}',
|
||||
'ie_key': 'NineCNineMedia',
|
||||
}
|
||||
if mobj := re.fullmatch(self._VIDEO_ID_RE, urllib.parse.urlparse(url).fragment):
|
||||
page_id = mobj.group('id')
|
||||
|
||||
if page_id.isdigit():
|
||||
return ninecninemedia_url_result(page_id)
|
||||
else:
|
||||
webpage = self._download_webpage(f'http://www.ctvnews.ca/{page_id}', page_id, query={
|
||||
'ot': 'example.AjaxPageLayout.ot',
|
||||
'maxItemsPerPage': 1000000,
|
||||
})
|
||||
entries = [ninecninemedia_url_result(clip_id) for clip_id in orderedSet(
|
||||
re.findall(r'clip\.id\s*=\s*(\d+);', webpage))]
|
||||
if not entries:
|
||||
webpage = self._download_webpage(url, page_id)
|
||||
if 'getAuthStates("' in webpage:
|
||||
entries = [ninecninemedia_url_result(clip_id) for clip_id in
|
||||
self._search_regex(r'getAuthStates\("([\d+,]+)"', webpage, 'clip ids').split(',')]
|
||||
return self.playlist_result(entries, page_id)
|
||||
if re.fullmatch(self._VIDEO_ID_RE, page_id):
|
||||
return self._ninecninemedia_url_result(page_id)
|
||||
|
||||
webpage = self._download_webpage(f'https://www.ctvnews.ca/{page_id}', page_id, query={
|
||||
'ot': 'example.AjaxPageLayout.ot',
|
||||
'maxItemsPerPage': 1000000,
|
||||
})
|
||||
entries = [self._ninecninemedia_url_result(clip_id)
|
||||
for clip_id in orderedSet(re.findall(r'clip\.id\s*=\s*(\d+);', webpage))]
|
||||
if not entries:
|
||||
webpage = self._download_webpage(url, page_id)
|
||||
if 'getAuthStates("' in webpage:
|
||||
entries = [self._ninecninemedia_url_result(clip_id) for clip_id in
|
||||
self._search_regex(r'getAuthStates\("([\d+,]+)"', webpage, 'clip ids').split(',')]
|
||||
else:
|
||||
entries = [
|
||||
self._ninecninemedia_url_result(clip_id) for clip_id in
|
||||
traverse_obj(webpage, (
|
||||
{find_element(tag='jasper-player-container', html=True)},
|
||||
{extract_attributes}, 'axis-ids', {json.loads}, ..., 'axisId', {str}))
|
||||
]
|
||||
|
||||
return self.playlist_result(entries, page_id)
|
||||
|
|
|
@ -1,7 +1,10 @@
|
|||
import time
|
||||
|
||||
from .common import InfoExtractor
|
||||
from ..networking.exceptions import HTTPError
|
||||
from ..utils import (
|
||||
ExtractorError,
|
||||
jwt_decode_hs256,
|
||||
parse_codecs,
|
||||
try_get,
|
||||
url_or_none,
|
||||
|
@ -13,9 +16,6 @@
|
|||
class DigitalConcertHallIE(InfoExtractor):
|
||||
IE_DESC = 'DigitalConcertHall extractor'
|
||||
_VALID_URL = r'https?://(?:www\.)?digitalconcerthall\.com/(?P<language>[a-z]+)/(?P<type>film|concert|work)/(?P<id>[0-9]+)-?(?P<part>[0-9]+)?'
|
||||
_OAUTH_URL = 'https://api.digitalconcerthall.com/v2/oauth2/token'
|
||||
_USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15'
|
||||
_ACCESS_TOKEN = None
|
||||
_NETRC_MACHINE = 'digitalconcerthall'
|
||||
_TESTS = [{
|
||||
'note': 'Playlist with only one video',
|
||||
|
@ -69,59 +69,157 @@ class DigitalConcertHallIE(InfoExtractor):
|
|||
'params': {'skip_download': 'm3u8'},
|
||||
'playlist_count': 1,
|
||||
}]
|
||||
_LOGIN_HINT = ('Use --username token --password ACCESS_TOKEN where ACCESS_TOKEN '
|
||||
'is the "access_token_production" from your browser local storage')
|
||||
_REFRESH_HINT = 'or else use a "refresh_token" with --username refresh --password REFRESH_TOKEN'
|
||||
_OAUTH_URL = 'https://api.digitalconcerthall.com/v2/oauth2/token'
|
||||
_CLIENT_ID = 'dch.webapp'
|
||||
_CLIENT_SECRET = '2ySLN+2Fwb'
|
||||
_USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15'
|
||||
_OAUTH_HEADERS = {
|
||||
'Accept': 'application/json',
|
||||
'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
|
||||
'Origin': 'https://www.digitalconcerthall.com',
|
||||
'Referer': 'https://www.digitalconcerthall.com/',
|
||||
'User-Agent': _USER_AGENT,
|
||||
}
|
||||
_access_token = None
|
||||
_access_token_expiry = 0
|
||||
_refresh_token = None
|
||||
|
||||
def _perform_login(self, username, password):
|
||||
login_token = self._download_json(
|
||||
self._OAUTH_URL,
|
||||
None, 'Obtaining token', errnote='Unable to obtain token', data=urlencode_postdata({
|
||||
@property
|
||||
def _access_token_is_expired(self):
|
||||
return self._access_token_expiry - 30 <= int(time.time())
|
||||
|
||||
def _set_access_token(self, value):
|
||||
self._access_token = value
|
||||
self._access_token_expiry = traverse_obj(value, ({jwt_decode_hs256}, 'exp', {int})) or 0
|
||||
|
||||
def _cache_tokens(self, /):
|
||||
self.cache.store(self._NETRC_MACHINE, 'tokens', {
|
||||
'access_token': self._access_token,
|
||||
'refresh_token': self._refresh_token,
|
||||
})
|
||||
|
||||
def _fetch_new_tokens(self, invalidate=False):
|
||||
if invalidate:
|
||||
self.report_warning('Access token has been invalidated')
|
||||
self._set_access_token(None)
|
||||
|
||||
if not self._access_token_is_expired:
|
||||
return
|
||||
|
||||
if not self._refresh_token:
|
||||
self._set_access_token(None)
|
||||
self._cache_tokens()
|
||||
raise ExtractorError(
|
||||
'Access token has expired or been invalidated. '
|
||||
'Get a new "access_token_production" value from your browser '
|
||||
f'and try again, {self._REFRESH_HINT}', expected=True)
|
||||
|
||||
# If we only have a refresh token, we need a temporary "initial token" for the refresh flow
|
||||
bearer_token = self._access_token or self._download_json(
|
||||
self._OAUTH_URL, None, 'Obtaining initial token', 'Unable to obtain initial token',
|
||||
data=urlencode_postdata({
|
||||
'affiliate': 'none',
|
||||
'grant_type': 'device',
|
||||
'device_vendor': 'unknown',
|
||||
# device_model 'Safari' gets split streams of 4K/HEVC video and lossless/FLAC audio
|
||||
'device_model': 'unknown' if self._configuration_arg('prefer_combined_hls') else 'Safari',
|
||||
'app_id': 'dch.webapp',
|
||||
# device_model 'Safari' gets split streams of 4K/HEVC video and lossless/FLAC audio,
|
||||
# but this is no longer effective since actual login is not possible anymore
|
||||
'device_model': 'unknown',
|
||||
'app_id': self._CLIENT_ID,
|
||||
'app_distributor': 'berlinphil',
|
||||
'app_version': '1.84.0',
|
||||
'client_secret': '2ySLN+2Fwb',
|
||||
}), headers={
|
||||
'Accept': 'application/json',
|
||||
'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
|
||||
'User-Agent': self._USER_AGENT,
|
||||
})['access_token']
|
||||
'app_version': '1.95.0',
|
||||
'client_secret': self._CLIENT_SECRET,
|
||||
}), headers=self._OAUTH_HEADERS)['access_token']
|
||||
|
||||
try:
|
||||
login_response = self._download_json(
|
||||
self._OAUTH_URL,
|
||||
None, note='Logging in', errnote='Unable to login', data=urlencode_postdata({
|
||||
'grant_type': 'password',
|
||||
'username': username,
|
||||
'password': password,
|
||||
response = self._download_json(
|
||||
self._OAUTH_URL, None, 'Refreshing token', 'Unable to refresh token',
|
||||
data=urlencode_postdata({
|
||||
'grant_type': 'refresh_token',
|
||||
'refresh_token': self._refresh_token,
|
||||
'client_id': self._CLIENT_ID,
|
||||
'client_secret': self._CLIENT_SECRET,
|
||||
}), headers={
|
||||
'Accept': 'application/json',
|
||||
'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
|
||||
'Referer': 'https://www.digitalconcerthall.com',
|
||||
'Authorization': f'Bearer {login_token}',
|
||||
'User-Agent': self._USER_AGENT,
|
||||
**self._OAUTH_HEADERS,
|
||||
'Authorization': f'Bearer {bearer_token}',
|
||||
})
|
||||
except ExtractorError as error:
|
||||
if isinstance(error.cause, HTTPError) and error.cause.status == 401:
|
||||
raise ExtractorError('Invalid username or password', expected=True)
|
||||
except ExtractorError as e:
|
||||
if isinstance(e.cause, HTTPError) and e.cause.status == 401:
|
||||
self._set_access_token(None)
|
||||
self._refresh_token = None
|
||||
self._cache_tokens()
|
||||
raise ExtractorError('Your tokens have been invalidated', expected=True)
|
||||
raise
|
||||
self._ACCESS_TOKEN = login_response['access_token']
|
||||
|
||||
self._set_access_token(response['access_token'])
|
||||
if refresh_token := traverse_obj(response, ('refresh_token', {str})):
|
||||
self.write_debug('New refresh token granted')
|
||||
self._refresh_token = refresh_token
|
||||
self._cache_tokens()
|
||||
|
||||
def _perform_login(self, username, password):
|
||||
self.report_login()
|
||||
|
||||
if username == 'refresh':
|
||||
self._refresh_token = password
|
||||
self._fetch_new_tokens()
|
||||
|
||||
if username == 'token':
|
||||
if not traverse_obj(password, {jwt_decode_hs256}):
|
||||
raise ExtractorError(
|
||||
f'The access token passed to yt-dlp is not valid. {self._LOGIN_HINT}', expected=True)
|
||||
self._set_access_token(password)
|
||||
self._cache_tokens()
|
||||
|
||||
if username in ('refresh', 'token'):
|
||||
if self.get_param('cachedir') is not False:
|
||||
token_type = 'access' if username == 'token' else 'refresh'
|
||||
self.to_screen(f'Your {token_type} token has been cached to disk. To use the cached '
|
||||
'token next time, pass --username cache along with any password')
|
||||
return
|
||||
|
||||
if username != 'cache':
|
||||
raise ExtractorError(
|
||||
'Login with username and password is no longer supported '
|
||||
f'for this site. {self._LOGIN_HINT}, {self._REFRESH_HINT}', expected=True)
|
||||
|
||||
# Try cached access_token
|
||||
cached_tokens = self.cache.load(self._NETRC_MACHINE, 'tokens', default={})
|
||||
self._set_access_token(cached_tokens.get('access_token'))
|
||||
self._refresh_token = cached_tokens.get('refresh_token')
|
||||
if not self._access_token_is_expired:
|
||||
return
|
||||
|
||||
# Try cached refresh_token
|
||||
self._fetch_new_tokens(invalidate=True)
|
||||
|
||||
def _real_initialize(self):
|
||||
if not self._ACCESS_TOKEN:
|
||||
self.raise_login_required(method='password')
|
||||
if not self._access_token:
|
||||
self.raise_login_required(
|
||||
'All content on this site is only available for registered users. '
|
||||
f'{self._LOGIN_HINT}, {self._REFRESH_HINT}', method=None)
|
||||
|
||||
def _entries(self, items, language, type_, **kwargs):
|
||||
for item in items:
|
||||
video_id = item['id']
|
||||
stream_info = self._download_json(
|
||||
self._proto_relative_url(item['_links']['streams']['href']), video_id, headers={
|
||||
'Accept': 'application/json',
|
||||
'Authorization': f'Bearer {self._ACCESS_TOKEN}',
|
||||
'Accept-Language': language,
|
||||
'User-Agent': self._USER_AGENT,
|
||||
})
|
||||
|
||||
for should_retry in (True, False):
|
||||
self._fetch_new_tokens(invalidate=not should_retry)
|
||||
try:
|
||||
stream_info = self._download_json(
|
||||
self._proto_relative_url(item['_links']['streams']['href']), video_id, headers={
|
||||
'Accept': 'application/json',
|
||||
'Authorization': f'Bearer {self._access_token}',
|
||||
'Accept-Language': language,
|
||||
'User-Agent': self._USER_AGENT,
|
||||
})
|
||||
break
|
||||
except ExtractorError as error:
|
||||
if should_retry and isinstance(error.cause, HTTPError) and error.cause.status == 401:
|
||||
continue
|
||||
raise
|
||||
|
||||
formats = []
|
||||
for m3u8_url in traverse_obj(stream_info, ('channel', ..., 'stream', ..., 'url', {url_or_none})):
|
||||
|
@ -157,7 +255,6 @@ def _real_extract(self, url):
|
|||
'Accept': 'application/json',
|
||||
'Accept-Language': language,
|
||||
'User-Agent': self._USER_AGENT,
|
||||
'Authorization': f'Bearer {self._ACCESS_TOKEN}',
|
||||
})
|
||||
videos = [vid_info] if type_ == 'film' else traverse_obj(vid_info, ('_embedded', ..., ...))
|
||||
|
||||
|
|
|
@ -569,7 +569,7 @@ def extract_dash_manifest(vid_data, formats, mpd_url=None):
|
|||
if dash_manifest:
|
||||
formats.extend(self._parse_mpd_formats(
|
||||
compat_etree_fromstring(urllib.parse.unquote_plus(dash_manifest)),
|
||||
mpd_url=url_or_none(video.get('dash_manifest_url')) or mpd_url))
|
||||
mpd_url=url_or_none(vid_data.get('dash_manifest_url')) or mpd_url))
|
||||
|
||||
def process_formats(info):
|
||||
# Downloads with browser's User-Agent are rate limited. Working around
|
||||
|
|
yt_dlp/extractor/kenh14.py (new file, 160 lines)
@@ -0,0 +1,160 @@
from .common import InfoExtractor
from ..utils import (
    clean_html,
    extract_attributes,
    get_element_by_class,
    get_element_html_by_attribute,
    get_elements_html_by_class,
    int_or_none,
    parse_duration,
    parse_iso8601,
    remove_start,
    strip_or_none,
    unescapeHTML,
    update_url,
    url_or_none,
)
from ..utils.traversal import traverse_obj


class Kenh14VideoIE(InfoExtractor):
    _VALID_URL = r'https?://video\.kenh14\.vn/(?:video/)?[\w-]+-(?P<id>[0-9]+)\.chn'
    _TESTS = [{
        'url': 'https://video.kenh14.vn/video/mo-hop-iphone-14-pro-max-nguon-unbox-therapy-316173.chn',
        'md5': '1ed67f9c3a1e74acf15db69590cf6210',
        'info_dict': {
            'id': '316173',
            'ext': 'mp4',
            'title': 'Video mở hộp iPhone 14 Pro Max (Nguồn: Unbox Therapy)',
            'description': 'Video mở hộp iPhone 14 Pro MaxVideo mở hộp iPhone 14 Pro Max (Nguồn: Unbox Therapy)',
            'thumbnail': r're:^https?://videothumbs\.mediacdn\.vn/.*\.jpg$',
            'tags': [],
            'uploader': 'Unbox Therapy',
            'upload_date': '20220517',
            'view_count': int,
            'duration': 722.86,
            'timestamp': 1652764468,
        },
    }, {
        'url': 'https://video.kenh14.vn/video-316174.chn',
        'md5': '2b41877d2afaf4a3f487ceda8e5c7cbd',
        'info_dict': {
            'id': '316174',
            'ext': 'mp4',
            'title': 'Khoảnh khắc VĐV nằm gục khóc sau chiến thắng: 7 năm trời Việt Nam mới có HCV kiếm chém nữ, chỉ có 8 tháng để khổ luyện trước khi lên sàn đấu',
            'description': 'md5:de86aa22e143e2b277bce8ec9c6f17dc',
            'thumbnail': r're:^https?://videothumbs\.mediacdn\.vn/.*\.jpg$',
            'tags': [],
            'upload_date': '20220517',
            'view_count': int,
            'duration': 70.04,
            'timestamp': 1652766021,
        },
    }, {
        'url': 'https://video.kenh14.vn/0-344740.chn',
        'md5': 'b843495d5e728142c8870c09b46df2a9',
        'info_dict': {
            'id': '344740',
            'ext': 'mov',
            'title': 'Kỳ Duyên đầy căng thẳng trong buổi ra quân đi Miss Universe, nghi thức tuyên thuệ lần đầu xuất hiện gây nhiều tranh cãi',
            'description': 'md5:2a2dbb4a7397169fb21ee68f09160497',
            'thumbnail': r're:^https?://kenh14cdn\.com/.*\.jpg$',
            'tags': ['kỳ duyên', 'Kỳ Duyên tuyên thuệ', 'miss universe'],
            'uploader': 'Quang Vũ',
            'upload_date': '20241024',
            'view_count': int,
            'duration': 198.88,
            'timestamp': 1729741590,
        },
    }]

    def _real_extract(self, url):
        video_id = self._match_id(url)
        webpage = self._download_webpage(url, video_id)

        attrs = extract_attributes(get_element_html_by_attribute('type', 'VideoStream', webpage) or '')
        direct_url = attrs['data-vid']

        metadata = self._download_json(
            'https://api.kinghub.vn/video/api/v1/detailVideoByGet?FileName={}'.format(
                remove_start(direct_url, 'kenh14cdn.com/')), video_id, fatal=False)

        formats = [{'url': f'https://{direct_url}', 'format_id': 'http', 'quality': 1}]
        subtitles = {}
        video_data = self._download_json(
            f'https://{direct_url}.json', video_id, note='Downloading video data', fatal=False)
        if hls_url := traverse_obj(video_data, ('hls', {url_or_none})):
            fmts, subs = self._extract_m3u8_formats_and_subtitles(
                hls_url, video_id, m3u8_id='hls', fatal=False)
            formats.extend(fmts)
            self._merge_subtitles(subs, target=subtitles)
        if dash_url := traverse_obj(video_data, ('mpd', {url_or_none})):
            fmts, subs = self._extract_mpd_formats_and_subtitles(
                dash_url, video_id, mpd_id='dash', fatal=False)
            formats.extend(fmts)
            self._merge_subtitles(subs, target=subtitles)

        return {
            **traverse_obj(metadata, {
                'duration': ('duration', {parse_duration}),
                'uploader': ('author', {strip_or_none}),
                'timestamp': ('uploadtime', {parse_iso8601(delimiter=' ')}),
                'view_count': ('views', {int_or_none}),
            }),
            'id': video_id,
            'title': (
                traverse_obj(metadata, ('title', {strip_or_none}))
                or clean_html(self._og_search_title(webpage))
                or clean_html(get_element_by_class('vdbw-title', webpage))),
            'formats': formats,
            'subtitles': subtitles,
            'description': (
                clean_html(self._og_search_description(webpage))
                or clean_html(get_element_by_class('vdbw-sapo', webpage))),
            'thumbnail': (self._og_search_thumbnail(webpage) or attrs.get('data-thumb')),
            'tags': traverse_obj(self._html_search_meta('keywords', webpage), (
                {lambda x: x.split(';')}, ..., filter)),
        }


class Kenh14PlaylistIE(InfoExtractor):
    _VALID_URL = r'https?://video\.kenh14\.vn/playlist/[\w-]+-(?P<id>[0-9]+)\.chn'
    _TESTS = [{
        'url': 'https://video.kenh14.vn/playlist/tran-tinh-naked-love-mua-2-71.chn',
        'info_dict': {
            'id': '71',
            'title': 'Trần Tình (Naked love) mùa 2',
            'description': 'md5:e9522339304956dea931722dd72eddb2',
            'thumbnail': r're:^https?://kenh14cdn\.com/.*\.png$',
        },
        'playlist_count': 9,
    }, {
        'url': 'https://video.kenh14.vn/playlist/0-72.chn',
        'info_dict': {
            'id': '72',
            'title': 'Lau Lại Đầu Từ',
            'description': 'Cùng xem xưa và nay có gì khác biệt nhé!',
            'thumbnail': r're:^https?://kenh14cdn\.com/.*\.png$',
        },
        'playlist_count': 6,
    }]

    def _real_extract(self, url):
        playlist_id = self._match_id(url)
        webpage = self._download_webpage(url, playlist_id)

        category_detail = get_element_by_class('category-detail', webpage) or ''
        embed_info = traverse_obj(
            self._yield_json_ld(webpage, playlist_id),
            (lambda _, v: v['name'] and v['alternateName'], any)) or {}

        return self.playlist_from_matches(
            get_elements_html_by_class('video-item', webpage), playlist_id,
            (clean_html(get_element_by_class('name', category_detail)) or unescapeHTML(embed_info.get('name'))),
            getter=lambda x: 'https://video.kenh14.vn/video/video-{}.chn'.format(extract_attributes(x)['data-id']),
            ie=Kenh14VideoIE, playlist_description=(
                clean_html(get_element_by_class('description', category_detail))
                or unescapeHTML(embed_info.get('alternateName'))),
            thumbnail=traverse_obj(
                self._og_search_thumbnail(webpage),
                ({url_or_none}, {update_url(query=None)})))
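For reference, the tags traversal at the end of Kenh14VideoIE splits the keywords meta value on ';' and drops empty segments via the filter builtin. A quick illustration with made-up input (the string stands in for the page's keywords meta):

from yt_dlp.utils.traversal import traverse_obj

traverse_obj('kỳ duyên;;miss universe', ({lambda x: x.split(';')}, ..., filter))
# -> ['kỳ duyên', 'miss universe']  (the empty segment is filtered out)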
yt_dlp/extractor/litv.py
@@ -1,30 +1,32 @@
 import json
+import uuid

 from .common import InfoExtractor
 from ..utils import (
     ExtractorError,
     int_or_none,
+    join_nonempty,
     smuggle_url,
     traverse_obj,
     try_call,
     unsmuggle_url,
+    urljoin,
 )


 class LiTVIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?litv\.tv/(?:vod|promo)/[^/]+/(?:content\.do)?\?.*?\b(?:content_)?id=(?P<id>[^&]+)'
-
-    _URL_TEMPLATE = 'https://www.litv.tv/vod/%s/content.do?content_id=%s'
-
+    _VALID_URL = r'https?://(?:www\.)?litv\.tv/(?:[^/?#]+/watch/|vod/[^/?#]+/content\.do\?content_id=)(?P<id>[\w-]+)'
+    _URL_TEMPLATE = 'https://www.litv.tv/%s/watch/%s'
     _GEO_COUNTRIES = ['TW']
     _TESTS = [{
-        'url': 'https://www.litv.tv/vod/drama/content.do?brc_id=root&id=VOD00041610&isUHEnabled=true&autoPlay=1',
+        'url': 'https://www.litv.tv/drama/watch/VOD00041610',
         'info_dict': {
             'id': 'VOD00041606',
             'title': '花千骨',
         },
         'playlist_count': 51,  # 50 episodes + 1 trailer
     }, {
-        'url': 'https://www.litv.tv/vod/drama/content.do?brc_id=root&id=VOD00041610&isUHEnabled=true&autoPlay=1',
+        'url': 'https://www.litv.tv/drama/watch/VOD00041610',
         'md5': 'b90ff1e9f1d8f5cfcd0a44c3e2b34c7a',
         'info_dict': {
             'id': 'VOD00041610',
@@ -32,16 +34,15 @@ class LiTVIE(InfoExtractor):
             'title': '花千骨第1集',
             'thumbnail': r're:https?://.*\.jpg$',
             'description': '《花千骨》陸劇線上看。十六年前,平靜的村莊內,一名女嬰隨異相出生,途徑此地的蜀山掌門清虛道長算出此女命運非同一般,她體內散發的異香易招惹妖魔。一念慈悲下,他在村莊周邊設下結界阻擋妖魔入侵,讓其年滿十六後去蜀山,並賜名花千骨。',
-            'categories': ['奇幻', '愛情', '中國', '仙俠'],
+            'categories': ['奇幻', '愛情', '仙俠', '古裝'],
             'episode': 'Episode 1',
             'episode_number': 1,
         },
         'params': {
             'noplaylist': True,
         },
         'skip': 'Georestricted to Taiwan',
     }, {
-        'url': 'https://www.litv.tv/promo/miyuezhuan/?content_id=VOD00044841&',
+        'url': 'https://www.litv.tv/drama/watch/VOD00044841',
         'md5': '88322ea132f848d6e3e18b32a832b918',
         'info_dict': {
             'id': 'VOD00044841',
@@ -55,94 +56,62 @@ class LiTVIE(InfoExtractor):
     def _extract_playlist(self, playlist_data, content_type):
         all_episodes = [
             self.url_result(smuggle_url(
-                self._URL_TEMPLATE % (content_type, episode['contentId']),
+                self._URL_TEMPLATE % (content_type, episode['content_id']),
                 {'force_noplaylist': True}))  # To prevent infinite recursion
-            for episode in traverse_obj(playlist_data, ('seasons', ..., 'episode', lambda _, v: v['contentId']))]
+            for episode in traverse_obj(playlist_data, ('seasons', ..., 'episodes', lambda _, v: v['content_id']))]

-        return self.playlist_result(all_episodes, playlist_data['contentId'], playlist_data.get('title'))
+        return self.playlist_result(all_episodes, playlist_data['content_id'], playlist_data.get('title'))

     def _real_extract(self, url):
         url, smuggled_data = unsmuggle_url(url, {})

         video_id = self._match_id(url)

         webpage = self._download_webpage(url, video_id)
+        vod_data = self._search_nextjs_data(webpage, video_id)['props']['pageProps']

+        if self._search_regex(
+                r'(?i)<meta\s[^>]*http-equiv="refresh"\s[^>]*content="[0-9]+;\s*url=https://www\.litv\.tv/"',
+                webpage, 'meta refresh redirect', default=False, group=0):
+            raise ExtractorError('No such content found', expected=True)

+        program_info = traverse_obj(vod_data, ('programInformation', {dict})) or {}
+        playlist_data = traverse_obj(vod_data, ('seriesTree'))
+        if playlist_data and self._yes_playlist(program_info.get('series_id'), video_id, smuggled_data):
+            return self._extract_playlist(playlist_data, program_info.get('content_type'))

-        program_info = self._parse_json(self._search_regex(
-            r'var\s+programInfo\s*=\s*([^;]+)', webpage, 'VOD data', default='{}'),
-            video_id)
+        asset_id = traverse_obj(program_info, ('assets', 0, 'asset_id', {str}))
+        if asset_id:  # This is a VOD
+            media_type = 'vod'
+        else:  # This is a live stream
+            asset_id = program_info['content_id']
+            media_type = program_info['content_type']
+        puid = try_call(lambda: self._get_cookies('https://www.litv.tv/')['PUID'].value)
+        if puid:
+            endpoint = 'get-urls'
+        else:
+            puid = str(uuid.uuid4())
+            endpoint = 'get-urls-no-auth'
+        video_data = self._download_json(
+            f'https://www.litv.tv/api/{endpoint}', video_id,
+            data=json.dumps({'AssetId': asset_id, 'MediaType': media_type, 'puid': puid}).encode(),
+            headers={'Content-Type': 'application/json'})

-        # In browsers `getProgramInfo` request is always issued. Usually this
-        # endpoint gives the same result as the data embedded in the webpage.
-        # If, for some reason, there are no embedded data, we do an extra request.
-        if 'assetId' not in program_info:
-            program_info = self._download_json(
-                'https://www.litv.tv/vod/ajax/getProgramInfo', video_id,
-                query={'contentId': video_id},
-                headers={'Accept': 'application/json'})
-
-        series_id = program_info['seriesId']
-        if self._yes_playlist(series_id, video_id, smuggled_data):
-            playlist_data = self._download_json(
-                'https://www.litv.tv/vod/ajax/getSeriesTree', video_id,
-                query={'seriesId': series_id}, headers={'Accept': 'application/json'})
-            return self._extract_playlist(playlist_data, program_info['contentType'])
-
-        video_data = self._parse_json(self._search_regex(
-            r'uiHlsUrl\s*=\s*testBackendData\(([^;]+)\);',
-            webpage, 'video data', default='{}'), video_id)
-        if not video_data:
-            payload = {'assetId': program_info['assetId']}
-            puid = try_call(lambda: self._get_cookies('https://www.litv.tv/')['PUID'].value)
-            if puid:
-                payload.update({
-                    'type': 'auth',
-                    'puid': puid,
-                })
-                endpoint = 'getUrl'
-            else:
-                payload.update({
-                    'watchDevices': program_info['watchDevices'],
-                    'contentType': program_info['contentType'],
-                })
-                endpoint = 'getMainUrlNoAuth'
-            video_data = self._download_json(
-                f'https://www.litv.tv/vod/ajax/{endpoint}', video_id,
-                data=json.dumps(payload).encode(),
-                headers={'Content-Type': 'application/json'})

-        if not video_data.get('fullpath'):
-            error_msg = video_data.get('errorMessage')
-            if error_msg == 'vod.error.outsideregionerror':
+        if error := traverse_obj(video_data, ('error', {dict})):
+            error_msg = traverse_obj(error, ('message', {str}))
+            if error_msg and 'OutsideRegionError' in error_msg:
                 self.raise_geo_restricted('This video is available in Taiwan only')
-            if error_msg:
+            elif error_msg:
                 raise ExtractorError(f'{self.IE_NAME} said: {error_msg}', expected=True)
-            raise ExtractorError(f'Unexpected result from {self.IE_NAME}')
+            raise ExtractorError(f'Unexpected error from {self.IE_NAME}')

         formats = self._extract_m3u8_formats(
-            video_data['fullpath'], video_id, ext='mp4',
-            entry_protocol='m3u8_native', m3u8_id='hls')
+            video_data['result']['AssetURLs'][0], video_id, ext='mp4', m3u8_id='hls')
         for a_format in formats:
             # LiTV HLS segments doesn't like compressions
             a_format.setdefault('http_headers', {})['Accept-Encoding'] = 'identity'

-        title = program_info['title'] + program_info.get('secondaryMark', '')
-        description = program_info.get('description')
-        thumbnail = program_info.get('imageFile')
-        categories = [item['name'] for item in program_info.get('category', [])]
-        episode = int_or_none(program_info.get('episode'))
-
         return {
             'id': video_id,
             'formats': formats,
-            'title': title,
-            'description': description,
-            'thumbnail': thumbnail,
-            'categories': categories,
-            'episode_number': episode,
+            'title': join_nonempty('title', 'secondary_mark', delim='', from_dict=program_info),
+            **traverse_obj(program_info, {
+                'description': ('description', {str}),
+                'thumbnail': ('picture', {urljoin('https://p-cdnstatic.svc.litv.tv/')}),
+                'categories': ('genres', ..., 'name', {str}),
+                'episode_number': ('episode', {int_or_none}),
+            }),
         }
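The rewritten metadata block builds the title with join_nonempty(..., from_dict=...), which looks the given field names up in program_info and skips missing or empty values. Illustration with made-up data:

from yt_dlp.utils import join_nonempty

program_info = {'title': '花千骨', 'secondary_mark': '第1集'}
join_nonempty('title', 'secondary_mark', delim='', from_dict=program_info)
# -> '花千骨第1集'; if 'secondary_mark' were absent, the result would be just '花千骨'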
yt_dlp/extractor/mildom.py (deleted file, 291 lines)
@@ -1,291 +0,0 @@
import functools
import json
import uuid

from .common import InfoExtractor
from ..utils import (
    ExtractorError,
    OnDemandPagedList,
    determine_ext,
    dict_get,
    float_or_none,
    traverse_obj,
)


class MildomBaseIE(InfoExtractor):
    _GUEST_ID = None

    def _call_api(self, url, video_id, query=None, note='Downloading JSON metadata', body=None):
        if not self._GUEST_ID:
            self._GUEST_ID = f'pc-gp-{uuid.uuid4()}'

        content = self._download_json(
            url, video_id, note=note, data=json.dumps(body).encode() if body else None,
            headers={'Content-Type': 'application/json'} if body else {},
            query={
                '__guest_id': self._GUEST_ID,
                '__platform': 'web',
                **(query or {}),
            })

        if content['code'] != 0:
            raise ExtractorError(
                f'Mildom says: {content["message"]} (code {content["code"]})',
                expected=True)
        return content['body']


class MildomIE(MildomBaseIE):
    IE_NAME = 'mildom'
    IE_DESC = 'Record ongoing live by specific user in Mildom'
    _VALID_URL = r'https?://(?:(?:www|m)\.)mildom\.com/(?P<id>\d+)'

    def _real_extract(self, url):
        video_id = self._match_id(url)
        webpage = self._download_webpage(f'https://www.mildom.com/{video_id}', video_id)

        enterstudio = self._call_api(
            'https://cloudac.mildom.com/nonolive/gappserv/live/enterstudio', video_id,
            note='Downloading live metadata', query={'user_id': video_id})
        result_video_id = enterstudio.get('log_id', video_id)

        servers = self._call_api(
            'https://cloudac.mildom.com/nonolive/gappserv/live/liveserver', result_video_id,
            note='Downloading live server list', query={
                'user_id': video_id,
                'live_server_type': 'hls',
            })

        playback_token = self._call_api(
            'https://cloudac.mildom.com/nonolive/gappserv/live/token', result_video_id,
            note='Obtaining live playback token', body={'host_id': video_id, 'type': 'hls'})
        playback_token = traverse_obj(playback_token, ('data', ..., 'token'), get_all=False)
        if not playback_token:
            raise ExtractorError('Failed to obtain live playback token')

        formats = self._extract_m3u8_formats(
            f'{servers["stream_server"]}/{video_id}_master.m3u8?{playback_token}',
            result_video_id, 'mp4', headers={
                'Referer': 'https://www.mildom.com/',
                'Origin': 'https://www.mildom.com',
            })

        for fmt in formats:
            fmt.setdefault('http_headers', {})['Referer'] = 'https://www.mildom.com/'

        return {
            'id': result_video_id,
            'title': self._html_search_meta('twitter:description', webpage, default=None) or traverse_obj(enterstudio, 'anchor_intro'),
            'description': traverse_obj(enterstudio, 'intro', 'live_intro', expected_type=str),
            'timestamp': float_or_none(enterstudio.get('live_start_ms'), scale=1000),
            'uploader': self._html_search_meta('twitter:title', webpage, default=None) or traverse_obj(enterstudio, 'loginname'),
            'uploader_id': video_id,
            'formats': formats,
            'is_live': True,
        }


class MildomVodIE(MildomBaseIE):
    IE_NAME = 'mildom:vod'
    IE_DESC = 'VOD in Mildom'
    _VALID_URL = r'https?://(?:(?:www|m)\.)mildom\.com/playback/(?P<user_id>\d+)/(?P<id>(?P=user_id)-[a-zA-Z0-9]+-?[0-9]*)'
    _TESTS = [{
        'url': 'https://www.mildom.com/playback/10882672/10882672-1597662269',
        'info_dict': {
            'id': '10882672-1597662269',
            'ext': 'mp4',
            'title': '始めてのミルダム配信じゃぃ!',
            'thumbnail': r're:^https?://.*\.(png|jpg)$',
            'upload_date': '20200817',
            'duration': 4138.37,
            'description': 'ゲームをしたくて!',
            'timestamp': 1597662269.0,
            'uploader_id': '10882672',
            'uploader': 'kson組長(けいそん)',
        },
    }, {
        'url': 'https://www.mildom.com/playback/10882672/10882672-1597758589870-477',
        'info_dict': {
            'id': '10882672-1597758589870-477',
            'ext': 'mp4',
            'title': '【kson】感染メイズ!麻酔銃で無双する',
            'thumbnail': r're:^https?://.*\.(png|jpg)$',
            'timestamp': 1597759093.0,
            'uploader': 'kson組長(けいそん)',
            'duration': 4302.58,
            'uploader_id': '10882672',
            'description': 'このステージ絶対乗り越えたい',
            'upload_date': '20200818',
        },
    }, {
        'url': 'https://www.mildom.com/playback/10882672/10882672-buha9td2lrn97fk2jme0',
        'info_dict': {
            'id': '10882672-buha9td2lrn97fk2jme0',
            'ext': 'mp4',
            'title': '【kson組長】CART RACER!!!',
            'thumbnail': r're:^https?://.*\.(png|jpg)$',
            'uploader_id': '10882672',
            'uploader': 'kson組長(けいそん)',
            'upload_date': '20201104',
            'timestamp': 1604494797.0,
            'duration': 4657.25,
            'description': 'WTF',
        },
    }]

    def _real_extract(self, url):
        user_id, video_id = self._match_valid_url(url).group('user_id', 'id')
        webpage = self._download_webpage(f'https://www.mildom.com/playback/{user_id}/{video_id}', video_id)

        autoplay = self._call_api(
            'https://cloudac.mildom.com/nonolive/videocontent/playback/getPlaybackDetail', video_id,
            note='Downloading playback metadata', query={
                'v_id': video_id,
            })['playback']

        formats = [{
            'url': autoplay['audio_url'],
            'format_id': 'audio',
            'protocol': 'm3u8_native',
            'vcodec': 'none',
            'acodec': 'aac',
            'ext': 'm4a',
        }]
        for fmt in autoplay['video_link']:
            formats.append({
                'format_id': 'video-{}'.format(fmt['name']),
                'url': fmt['url'],
                'protocol': 'm3u8_native',
                'width': fmt['level'] * autoplay['video_width'] // autoplay['video_height'],
                'height': fmt['level'],
                'vcodec': 'h264',
                'acodec': 'aac',
                'ext': 'mp4',
            })

        return {
            'id': video_id,
            'title': self._html_search_meta(('og:description', 'description'), webpage, default=None) or autoplay.get('title'),
            'description': traverse_obj(autoplay, 'video_intro'),
            'timestamp': float_or_none(autoplay.get('publish_time'), scale=1000),
            'duration': float_or_none(autoplay.get('video_length'), scale=1000),
            'thumbnail': dict_get(autoplay, ('upload_pic', 'video_pic')),
            'uploader': traverse_obj(autoplay, ('author_info', 'login_name')),
            'uploader_id': user_id,
            'formats': formats,
        }


class MildomClipIE(MildomBaseIE):
    IE_NAME = 'mildom:clip'
    IE_DESC = 'Clip in Mildom'
    _VALID_URL = r'https?://(?:(?:www|m)\.)mildom\.com/clip/(?P<id>(?P<user_id>\d+)-[a-zA-Z0-9]+)'
    _TESTS = [{
        'url': 'https://www.mildom.com/clip/10042245-63921673e7b147ebb0806d42b5ba5ce9',
        'info_dict': {
            'id': '10042245-63921673e7b147ebb0806d42b5ba5ce9',
            'title': '全然違ったよ',
            'timestamp': 1619181890,
            'duration': 59,
            'thumbnail': r're:https?://.+',
            'uploader': 'ざきんぽ',
            'uploader_id': '10042245',
        },
    }, {
        'url': 'https://www.mildom.com/clip/10111524-ebf4036e5aa8411c99fb3a1ae0902864',
        'info_dict': {
            'id': '10111524-ebf4036e5aa8411c99fb3a1ae0902864',
            'title': 'かっこいい',
            'timestamp': 1621094003,
            'duration': 59,
            'thumbnail': r're:https?://.+',
            'uploader': '(ルーキー',
            'uploader_id': '10111524',
        },
    }, {
        'url': 'https://www.mildom.com/clip/10660174-2c539e6e277c4aaeb4b1fbe8d22cb902',
        'info_dict': {
            'id': '10660174-2c539e6e277c4aaeb4b1fbe8d22cb902',
            'title': 'あ',
            'timestamp': 1614769431,
            'duration': 31,
            'thumbnail': r're:https?://.+',
            'uploader': 'ドルゴルスレンギーン=ダグワドルジ',
            'uploader_id': '10660174',
        },
    }]

    def _real_extract(self, url):
        user_id, video_id = self._match_valid_url(url).group('user_id', 'id')
        webpage = self._download_webpage(f'https://www.mildom.com/clip/{video_id}', video_id)

        clip_detail = self._call_api(
            'https://cloudac-cf-jp.mildom.com/nonolive/videocontent/clip/detail', video_id,
            note='Downloading playback metadata', query={
                'clip_id': video_id,
            })

        return {
            'id': video_id,
            'title': self._html_search_meta(
                ('og:description', 'description'), webpage, default=None) or clip_detail.get('title'),
            'timestamp': float_or_none(clip_detail.get('create_time')),
            'duration': float_or_none(clip_detail.get('length')),
            'thumbnail': clip_detail.get('cover'),
            'uploader': traverse_obj(clip_detail, ('user_info', 'loginname')),
            'uploader_id': user_id,

            'url': clip_detail['url'],
            'ext': determine_ext(clip_detail.get('url'), 'mp4'),
        }


class MildomUserVodIE(MildomBaseIE):
    IE_NAME = 'mildom:user:vod'
    IE_DESC = 'Download all VODs from specific user in Mildom'
    _VALID_URL = r'https?://(?:(?:www|m)\.)mildom\.com/profile/(?P<id>\d+)'
    _TESTS = [{
        'url': 'https://www.mildom.com/profile/10093333',
        'info_dict': {
            'id': '10093333',
            'title': 'Uploads from ねこばたけ',
        },
        'playlist_mincount': 732,
    }, {
        'url': 'https://www.mildom.com/profile/10882672',
        'info_dict': {
            'id': '10882672',
            'title': 'Uploads from kson組長(けいそん)',
        },
        'playlist_mincount': 201,
    }]

    def _fetch_page(self, user_id, page):
        page += 1
        reply = self._call_api(
            'https://cloudac.mildom.com/nonolive/videocontent/profile/playbackList',
            user_id, note=f'Downloading page {page}', query={
                'user_id': user_id,
                'page': page,
                'limit': '30',
            })
        if not reply:
            return
        for x in reply:
            v_id = x.get('v_id')
            if not v_id:
                continue
            yield self.url_result(f'https://www.mildom.com/playback/{user_id}/{v_id}')

    def _real_extract(self, url):
        user_id = self._match_id(url)
        self.to_screen(f'This will download all VODs belonging to user. To download ongoing live video, use "https://www.mildom.com/{user_id}" instead')

        profile = self._call_api(
            'https://cloudac.mildom.com/nonolive/gappserv/user/profileV2', user_id,
            query={'user_id': user_id}, note='Downloading user profile')['user_info']

        return self.playlist_result(
            OnDemandPagedList(functools.partial(self._fetch_page, user_id), 30),
            user_id, f'Uploads from {profile["loginname"]}')
yt_dlp/extractor/patreon.py
@@ -16,10 +16,10 @@
     parse_iso8601,
     smuggle_url,
     str_or_none,
-    traverse_obj,
     url_or_none,
     urljoin,
 )
+from ..utils.traversal import traverse_obj, value


 class PatreonBaseIE(InfoExtractor):
@@ -252,6 +252,27 @@ class PatreonIE(PatreonBaseIE):
             'thumbnail': r're:^https?://.+',
         },
         'skip': 'Patron-only content',
+    }, {
+        # Contains a comment reply in the 'included' section
+        'url': 'https://www.patreon.com/posts/114721679',
+        'info_dict': {
+            'id': '114721679',
+            'ext': 'mp4',
+            'upload_date': '20241025',
+            'uploader': 'Japanalysis',
+            'like_count': int,
+            'thumbnail': r're:^https?://.+',
+            'comment_count': int,
+            'title': 'Karasawa Part 2',
+            'description': 'Part 2 of this video https://www.youtube.com/watch?v=Azms2-VTASk',
+            'uploader_url': 'https://www.patreon.com/japanalysis',
+            'uploader_id': '80504268',
+            'channel_url': 'https://www.patreon.com/japanalysis',
+            'channel_follower_count': int,
+            'timestamp': 1729897015,
+            'channel_id': '9346307',
+        },
+        'params': {'getcomments': True},
     }]
     _RETURN_TYPE = 'video'

@@ -404,26 +425,24 @@ def _get_comments(self, post_id):
                 f'posts/{post_id}/comments', post_id, query=params, note=f'Downloading comments page {page}')

             cursor = None
-            for comment in traverse_obj(response, (('data', ('included', lambda _, v: v['type'] == 'comment')), ...)):
+            for comment in traverse_obj(response, (('data', 'included'), lambda _, v: v['type'] == 'comment' and v['id'])):
                 count += 1
-                comment_id = comment.get('id')
-                attributes = comment.get('attributes') or {}
-                if comment_id is None:
-                    continue
                 author_id = traverse_obj(comment, ('relationships', 'commenter', 'data', 'id'))
-                author_info = traverse_obj(
-                    response, ('included', lambda _, v: v['id'] == author_id and v['type'] == 'user', 'attributes'),
-                    get_all=False, expected_type=dict, default={})
-
                 yield {
-                    'id': comment_id,
-                    'text': attributes.get('body'),
-                    'timestamp': parse_iso8601(attributes.get('created')),
-                    'parent': traverse_obj(comment, ('relationships', 'parent', 'data', 'id'), default='root'),
-                    'author_is_uploader': attributes.get('is_by_creator'),
+                    **traverse_obj(comment, {
+                        'id': ('id', {str_or_none}),
+                        'text': ('attributes', 'body', {str}),
+                        'timestamp': ('attributes', 'created', {parse_iso8601}),
+                        'parent': ('relationships', 'parent', 'data', ('id', {value('root')}), {str}, any),
+                        'author_is_uploader': ('attributes', 'is_by_creator', {bool}),
+                    }),
+                    **traverse_obj(response, (
+                        'included', lambda _, v: v['id'] == author_id and v['type'] == 'user', 'attributes', {
+                            'author': ('full_name', {str}),
+                            'author_thumbnail': ('image_url', {url_or_none}),
+                        }), get_all=False),
                     'author_id': author_id,
-                    'author': author_info.get('full_name'),
-                    'author_thumbnail': author_info.get('image_url'),
                 }

             if count < traverse_obj(response, ('meta', 'count')):
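The new 'parent' traversal uses value('root') as an inline fallback branch: any then picks the comment's own parent id when present, otherwise the literal 'root'. Illustration with made-up comment objects:

from yt_dlp.utils.traversal import traverse_obj, value

path = ('relationships', 'parent', 'data', ('id', {value('root')}), {str}, any)
traverse_obj({'relationships': {'parent': {'data': {'id': '123'}}}}, path)  # -> '123'
traverse_obj({'relationships': {'parent': {'data': {}}}}, path)             # -> 'root'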
yt_dlp/extractor/pialive.py (new file, 122 lines)
@@ -0,0 +1,122 @@
from .common import InfoExtractor
from ..utils import (
    ExtractorError,
    clean_html,
    extract_attributes,
    get_element_by_class,
    get_element_html_by_class,
    multipart_encode,
    str_or_none,
    unified_timestamp,
    url_or_none,
)
from ..utils.traversal import traverse_obj


class PiaLiveIE(InfoExtractor):
    _VALID_URL = r'https?://player\.pia-live\.jp/stream/(?P<id>[\w-]+)'
    _PLAYER_ROOT_URL = 'https://player.pia-live.jp/'
    _PIA_LIVE_API_URL = 'https://api.pia-live.jp'
    _API_KEY = 'kfds)FKFps-dms9e'
    _TESTS = [{
        'url': 'https://player.pia-live.jp/stream/4JagFBEIM14s_hK9aXHKf3k3F3bY5eoHFQxu68TC6krUDqGOwN4d61dCWQYOd6CTxl4hjya9dsfEZGsM4uGOUdax60lEI4twsXGXf7crmz8Gk__GhupTrWxA7RFRVt76',
        'info_dict': {
            'id': '88f3109a-f503-4d0f-a9f7-9f39ac745d84',
            'display_id': '2431867_001',
            'title': 'こながめでたい日2024の視聴ページ | PIA LIVE STREAM(ぴあライブストリーム)',
            'live_status': 'was_live',
            'comment_count': int,
        },
        'params': {
            'getcomments': True,
            'skip_download': True,
            'ignore_no_formats_error': True,
        },
        'skip': 'The video is no longer available',
    }, {
        'url': 'https://player.pia-live.jp/stream/4JagFBEIM14s_hK9aXHKf3k3F3bY5eoHFQxu68TC6krJdu0GVBVbVy01IwpJ6J3qBEm3d9TCTt1d0eWpsZGj7DrOjVOmS7GAWGwyscMgiThopJvzgWC4H5b-7XQjAfRZ',
        'info_dict': {
            'id': '9ce8b8ba-f6d1-4d1f-83a0-18c3148ded93',
            'display_id': '2431867_002',
            'title': 'こながめでたい日2024の視聴ページ | PIA LIVE STREAM(ぴあライブストリーム)',
            'live_status': 'was_live',
            'comment_count': int,
        },
        'params': {
            'getcomments': True,
            'skip_download': True,
            'ignore_no_formats_error': True,
        },
        'skip': 'The video is no longer available',
    }]

    def _extract_var(self, variable, html):
        return self._search_regex(
            rf'(?:var|const|let)\s+{variable}\s*=\s*(["\'])(?P<value>(?:(?!\1).)+)\1',
            html, f'variable {variable}', group='value')

    def _real_extract(self, url):
        video_key = self._match_id(url)
        webpage = self._download_webpage(url, video_key)

        program_code = self._extract_var('programCode', webpage)
        article_code = self._extract_var('articleCode', webpage)
        title = self._html_extract_title(webpage)

        if get_element_html_by_class('play-end', webpage):
            raise ExtractorError('The video is no longer available', expected=True, video_id=program_code)

        if start_info := clean_html(get_element_by_class('play-waiting__date', webpage)):
            date, time = self._search_regex(
                r'(?P<date>\d{4}/\d{1,2}/\d{1,2})\([月火水木金土日]\)(?P<time>\d{2}:\d{2})',
                start_info, 'start_info', fatal=False, group=('date', 'time'))
            if date and time:
                release_timestamp_str = f'{date} {time} +09:00'
                release_timestamp = unified_timestamp(release_timestamp_str)
                self.raise_no_formats(f'The video will be available after {release_timestamp_str}', expected=True)
                return {
                    'id': program_code,
                    'title': title,
                    'live_status': 'is_upcoming',
                    'release_timestamp': release_timestamp,
                }

        payload, content_type = multipart_encode({
            'play_url': video_key,
            'api_key': self._API_KEY,
        })
        api_data_and_headers = {
            'data': payload,
            'headers': {'Content-Type': content_type, 'Referer': self._PLAYER_ROOT_URL},
        }

        player_tag_list = self._download_json(
            f'{self._PIA_LIVE_API_URL}/perf/player-tag-list/{program_code}', program_code,
            'Fetching player tag list', 'Unable to fetch player tag list', **api_data_and_headers)

        return self.url_result(
            extract_attributes(player_tag_list['data']['movie_one_tag'])['src'],
            url_transparent=True, title=title, display_id=program_code,
            __post_extractor=self.extract_comments(program_code, article_code, api_data_and_headers))

    def _get_comments(self, program_code, article_code, api_data_and_headers):
        chat_room_url = traverse_obj(self._download_json(
            f'{self._PIA_LIVE_API_URL}/perf/chat-tag-list/{program_code}/{article_code}', program_code,
            'Fetching chat info', 'Unable to fetch chat info', fatal=False, **api_data_and_headers),
            ('data', 'chat_one_tag', {extract_attributes}, 'src', {url_or_none}))
        if not chat_room_url:
            return
        comment_page = self._download_webpage(
            chat_room_url, program_code, 'Fetching comment page', 'Unable to fetch comment page',
            fatal=False, headers={'Referer': self._PLAYER_ROOT_URL})
        if not comment_page:
            return
        yield from traverse_obj(self._search_json(
            r'var\s+_history\s*=', comment_page, 'comment list',
            program_code, contains_pattern=r'\[(?s:.+)\]', fatal=False), (..., {
                'timestamp': (0, {int}),
                'author_is_uploader': (1, {lambda x: x == 2}),
                'author': (2, {str}),
                'text': (3, {str}),
                'id': (4, {str_or_none}),
            }))
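The comment traversal at the end of PiaLiveIE treats each _history row as a positional list; judging from the field mapping, a row appears to be [timestamp, role, author, text, id], with role == 2 marking the uploader. A made-up row mapped by hand:

row = [1718000000, 2, '運営', 'まもなく開演です', '42']  # hypothetical entry
comment = {
    'timestamp': row[0],                # (0, {int})
    'author_is_uploader': row[1] == 2,  # (1, {lambda x: x == 2})
    'author': row[2],                   # (2, {str})
    'text': row[3],                     # (3, {str})
    'id': row[4],                       # (4, {str_or_none})
}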
yt_dlp/extractor/piaulizaportal.py (deleted file, 70 lines)
@@ -1,70 +0,0 @@
from .common import InfoExtractor
from ..utils import (
    ExtractorError,
    int_or_none,
    parse_qs,
    time_seconds,
    traverse_obj,
)


class PIAULIZAPortalIE(InfoExtractor):
    IE_DESC = 'ulizaportal.jp - PIA LIVE STREAM'
    _VALID_URL = r'https?://(?:www\.)?ulizaportal\.jp/pages/(?P<id>[\da-f]{8}-(?:[\da-f]{4}-){3}[\da-f]{12})'
    _TESTS = [{
        'url': 'https://ulizaportal.jp/pages/005f18b7-e810-5618-cb82-0987c5755d44',
        'info_dict': {
            'id': '005f18b7-e810-5618-cb82-0987c5755d44',
            'title': 'プレゼンテーションプレイヤーのサンプル',
            'live_status': 'not_live',
        },
        'params': {
            'skip_download': True,
            'ignore_no_formats_error': True,
        },
    }, {
        'url': 'https://ulizaportal.jp/pages/005e1b23-fe93-5780-19a0-98e917cc4b7d?expires=4102412400&signature=f422a993b683e1068f946caf406d211c17d1ef17da8bef3df4a519502155aa91&version=1',
        'info_dict': {
            'id': '005e1b23-fe93-5780-19a0-98e917cc4b7d',
            'title': '【確認用】視聴サンプルページ(ULIZA)',
            'live_status': 'not_live',
        },
        'params': {
            'skip_download': True,
            'ignore_no_formats_error': True,
        },
    }]

    def _real_extract(self, url):
        video_id = self._match_id(url)

        expires = int_or_none(traverse_obj(parse_qs(url), ('expires', 0)))
        if expires and expires <= time_seconds():
            raise ExtractorError('The link is expired.', video_id=video_id, expected=True)

        webpage = self._download_webpage(url, video_id)

        player_data = self._download_webpage(
            self._search_regex(
                r'<script [^>]*\bsrc="(https://player-api\.p\.uliza\.jp/v1/players/[^"]+)"',
                webpage, 'player data url'),
            video_id, headers={'Referer': 'https://ulizaportal.jp/'},
            note='Fetching player data', errnote='Unable to fetch player data')

        formats = self._extract_m3u8_formats(
            self._search_regex(
                r'["\'](https://vms-api\.p\.uliza\.jp/v1/prog-index\.m3u8[^"\']+)', player_data,
                'm3u8 url', default=None),
            video_id, fatal=False)
        m3u8_type = self._search_regex(
            r'/hls/(dvr|video)/', traverse_obj(formats, (0, 'url')), 'm3u8 type', default=None)

        return {
            'id': video_id,
            'title': self._html_extract_title(webpage),
            'formats': formats,
            'live_status': {
                'video': 'is_live',
                'dvr': 'was_live',  # short-term archives
            }.get(m3u8_type, 'not_live'),  # VOD or long-term archives
        }
yt_dlp/extractor/pokemon.py (deleted file, 136 lines)
@@ -1,136 +0,0 @@
from .common import InfoExtractor
from ..utils import (
    ExtractorError,
    extract_attributes,
    int_or_none,
    js_to_json,
    merge_dicts,
)


class PokemonIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?pokemon\.com/[a-z]{2}(?:.*?play=(?P<id>[a-z0-9]{32})|/(?:[^/]+/)+(?P<display_id>[^/?#&]+))'
    _TESTS = [{
        'url': 'https://www.pokemon.com/us/pokemon-episodes/20_30-the-ol-raise-and-switch/',
        'md5': '2fe8eaec69768b25ef898cda9c43062e',
        'info_dict': {
            'id': 'afe22e30f01c41f49d4f1d9eab5cd9a4',
            'ext': 'mp4',
            'title': 'The Ol’ Raise and Switch!',
            'description': 'md5:7db77f7107f98ba88401d3adc80ff7af',
        },
        'add_ie': ['LimelightMedia'],
    }, {
        # no data-video-title
        'url': 'https://www.pokemon.com/fr/episodes-pokemon/films-pokemon/pokemon-lascension-de-darkrai-2008',
        'info_dict': {
            'id': 'dfbaf830d7e54e179837c50c0c6cc0e1',
            'ext': 'mp4',
            'title': "Pokémon : L'ascension de Darkrai",
            'description': 'md5:d1dbc9e206070c3e14a06ff557659fb5',
        },
        'add_ie': ['LimelightMedia'],
        'params': {
            'skip_download': True,
        },
    }, {
        'url': 'http://www.pokemon.com/uk/pokemon-episodes/?play=2e8b5c761f1d4a9286165d7748c1ece2',
        'only_matching': True,
    }, {
        'url': 'http://www.pokemon.com/fr/episodes-pokemon/18_09-un-hiver-inattendu/',
        'only_matching': True,
    }, {
        'url': 'http://www.pokemon.com/de/pokemon-folgen/01_20-bye-bye-smettbo/',
        'only_matching': True,
    }]

    def _real_extract(self, url):
        video_id, display_id = self._match_valid_url(url).groups()
        webpage = self._download_webpage(url, video_id or display_id)
        video_data = extract_attributes(self._search_regex(
            r'(<[^>]+data-video-id="{}"[^>]*>)'.format(video_id if video_id else '[a-z0-9]{32}'),
            webpage, 'video data element'))
        video_id = video_data['data-video-id']
        title = video_data.get('data-video-title') or self._html_search_meta(
            'pkm-title', webpage, ' title', default=None) or self._search_regex(
            r'<h1[^>]+\bclass=["\']us-title[^>]+>([^<]+)', webpage, 'title')
        return {
            '_type': 'url_transparent',
            'id': video_id,
            'url': f'limelight:media:{video_id}',
            'title': title,
            'description': video_data.get('data-video-summary'),
            'thumbnail': video_data.get('data-video-poster'),
            'series': 'Pokémon',
            'season_number': int_or_none(video_data.get('data-video-season')),
            'episode': title,
            'episode_number': int_or_none(video_data.get('data-video-episode')),
            'ie_key': 'LimelightMedia',
        }


class PokemonWatchIE(InfoExtractor):
    _VALID_URL = r'https?://watch\.pokemon\.com/[a-z]{2}-[a-z]{2}/(?:#/)?player(?:\.html)?\?id=(?P<id>[a-z0-9]{32})'
    _API_URL = 'https://www.pokemon.com/api/pokemontv/v2/channels/{0:}'
    _TESTS = [{
        'url': 'https://watch.pokemon.com/en-us/player.html?id=8309a40969894a8e8d5bc1311e9c5667',
        'md5': '62833938a31e61ab49ada92f524c42ff',
        'info_dict': {
            'id': '8309a40969894a8e8d5bc1311e9c5667',
            'ext': 'mp4',
            'title': 'Lillier and the Staff!',
            'description': 'md5:338841b8c21b283d24bdc9b568849f04',
        },
    }, {
        'url': 'https://watch.pokemon.com/en-us/#/player?id=3fe7752ba09141f0b0f7756d1981c6b2',
        'only_matching': True,
    }, {
        'url': 'https://watch.pokemon.com/de-de/player.html?id=b3c402e111a4459eb47e12160ab0ba07',
        'only_matching': True,
    }]

    def _extract_media(self, channel_array, video_id):
        for channel in channel_array:
            for media in channel.get('media'):
                if media.get('id') == video_id:
                    return media
        return None

    def _real_extract(self, url):
        video_id = self._match_id(url)

        info = {
            '_type': 'url',
            'id': video_id,
            'url': f'limelight:media:{video_id}',
            'ie_key': 'LimelightMedia',
        }

        # API call can be avoided entirely if we are listing formats
        if self.get_param('listformats', False):
            return info

        webpage = self._download_webpage(url, video_id)
        build_vars = self._parse_json(self._search_regex(
            r'(?s)buildVars\s*=\s*({.*?})', webpage, 'build vars'),
            video_id, transform_source=js_to_json)
        region = build_vars.get('region')
        channel_array = self._download_json(self._API_URL.format(region), video_id)
        video_data = self._extract_media(channel_array, video_id)

        if video_data is None:
            raise ExtractorError(
                f'Video {video_id} does not exist', expected=True)

        info['_type'] = 'url_transparent'
        images = video_data.get('images')

        return merge_dicts(info, {
            'title': video_data.get('title'),
            'description': video_data.get('description'),
            'thumbnail': images.get('medium') or images.get('small'),
            'series': 'Pokémon',
            'season_number': int_or_none(video_data.get('season')),
            'episode': video_data.get('title'),
            'episode_number': int_or_none(video_data.get('episode')),
        })
yt_dlp/extractor/radioradicale.py (new file, 105 lines)
@@ -0,0 +1,105 @@
from .common import InfoExtractor
from ..utils import url_or_none
from ..utils.traversal import traverse_obj


class RadioRadicaleIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?radioradicale\.it/scheda/(?P<id>[0-9]+)'
    _TESTS = [{
        'url': 'https://www.radioradicale.it/scheda/471591',
        'md5': 'eb0fbe43a601f1a361cbd00f3c45af4a',
        'info_dict': {
            'id': '471591',
            'ext': 'mp4',
            'title': 'md5:e8fbb8de57011a3255db0beca69af73d',
            'description': 'md5:5e15a789a2fe4d67da8d1366996e89ef',
            'location': 'Napoli',
            'duration': 2852.0,
            'timestamp': 1459987200,
            'upload_date': '20160407',
            'thumbnail': 'https://www.radioradicale.it/photo400/0/0/9/0/1/00901768.jpg',
        },
    }, {
        'url': 'https://www.radioradicale.it/scheda/742783/parlamento-riunito-in-seduta-comune-11a-della-xix-legislatura',
        'info_dict': {
            'id': '742783',
            'title': 'Parlamento riunito in seduta comune (11ª della XIX legislatura)',
            'description': '-) Votazione per l\'elezione di un giudice della Corte Costituzionale (nono scrutinio)',
            'location': 'CAMERA',
            'duration': 5868.0,
            'timestamp': 1730246400,
            'upload_date': '20241030',
        },
        'playlist': [{
            'md5': 'aa48de55dcc45478e4cd200f299aab7d',
            'info_dict': {
                'id': '742783-0',
                'ext': 'mp4',
                'title': 'Parlamento riunito in seduta comune (11ª della XIX legislatura)',
            },
        }, {
            'md5': 'be915c189c70ad2920e5810f32260ff5',
            'info_dict': {
                'id': '742783-1',
                'ext': 'mp4',
                'title': 'Parlamento riunito in seduta comune (11ª della XIX legislatura)',
            },
        }, {
            'md5': 'f0ee4047342baf8ed3128a8417ac5e0a',
            'info_dict': {
                'id': '742783-2',
                'ext': 'mp4',
                'title': 'Parlamento riunito in seduta comune (11ª della XIX legislatura)',
            },
        }],
    }]

    def _entries(self, videos_info, page_id):
        for idx, video in enumerate(traverse_obj(
                videos_info, ('playlist', lambda _, v: v['sources']))):
            video_id = f'{page_id}-{idx}'
            formats = []
            subtitles = {}

            for m3u8_url in traverse_obj(video, ('sources', ..., 'src', {url_or_none})):
                fmts, subs = self._extract_m3u8_formats_and_subtitles(m3u8_url, video_id)
                formats.extend(fmts)
                self._merge_subtitles(subs, target=subtitles)
            for sub in traverse_obj(video, ('subtitles', ..., lambda _, v: url_or_none(v['src']))):
                self._merge_subtitles({sub.get('srclang') or 'und': [{
                    'url': sub['src'],
                    'name': sub.get('label'),
                }]}, target=subtitles)

            yield {
                'id': video_id,
                'title': video.get('title'),
                'formats': formats,
                'subtitles': subtitles,
            }

    def _real_extract(self, url):
        page_id = self._match_id(url)
        webpage = self._download_webpage(url, page_id)

        videos_info = self._search_json(
            r'jQuery\.extend\(Drupal\.settings\s*,',
            webpage, 'videos_info', page_id)['RRscheda']

        entries = list(self._entries(videos_info, page_id))

        common_info = {
            'id': page_id,
            'title': self._og_search_title(webpage),
            'description': self._og_search_description(webpage),
            'location': videos_info.get('luogo'),
            **self._search_json_ld(webpage, page_id),
        }

        if len(entries) == 1:
            return {
                **entries[0],
                **common_info,
            }

        return self.playlist_result(entries, multi_video=True, **common_info)
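RadioRadicaleIE pulls its player config out of the page's Drupal settings bootstrap. A made-up example of the markup the _search_json call above would match:

html = 'jQuery.extend(Drupal.settings, {"RRscheda": {"luogo": "Napoli", "playlist": []}});'
# self._search_json(r'jQuery\.extend\(Drupal\.settings\s*,', html, 'videos_info', page_id)
# returns the whole settings object; the extractor then reads its 'RRscheda' key.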
yt_dlp/extractor/reddit.py
@@ -259,6 +259,8 @@ def _real_extract(self, url):
                 f'https://www.reddit.com/{slug}/.json', video_id, expected_status=403)
         except ExtractorError as e:
             if isinstance(e.cause, json.JSONDecodeError):
+                if self._get_cookies('https://www.reddit.com/').get('reddit_session'):
+                    raise ExtractorError('Your IP address is unable to access the Reddit API', expected=True)
                 self.raise_login_required('Account authentication is required')
             raise
|
@ -213,7 +213,7 @@ def _real_extract(self, url):
|
|||
class RedGifsUserIE(RedGifsBaseInfoExtractor):
|
||||
IE_DESC = 'Redgifs user'
|
||||
_VALID_URL = r'https?://(?:www\.)?redgifs\.com/users/(?P<username>[^/?#]+)(?:\?(?P<query>[^#]+))?'
|
||||
_PAGE_SIZE = 30
|
||||
_PAGE_SIZE = 80
|
||||
_TESTS = [
|
||||
{
|
||||
'url': 'https://www.redgifs.com/users/lamsinka89',
|
||||
|
@ -222,7 +222,7 @@ class RedGifsUserIE(RedGifsBaseInfoExtractor):
|
|||
'title': 'lamsinka89',
|
||||
'description': 'RedGifs user lamsinka89, ordered by recent',
|
||||
},
|
||||
'playlist_mincount': 100,
|
||||
'playlist_mincount': 391,
|
||||
},
|
||||
{
|
||||
'url': 'https://www.redgifs.com/users/lamsinka89?page=3',
|
||||
|
@ -231,7 +231,7 @@ class RedGifsUserIE(RedGifsBaseInfoExtractor):
|
|||
'title': 'lamsinka89',
|
||||
'description': 'RedGifs user lamsinka89, ordered by recent',
|
||||
},
|
||||
'playlist_count': 30,
|
||||
'playlist_count': 80,
|
||||
},
|
||||
{
|
||||
'url': 'https://www.redgifs.com/users/lamsinka89?order=best&type=g',
|
||||
|
@ -240,7 +240,17 @@ class RedGifsUserIE(RedGifsBaseInfoExtractor):
|
|||
'title': 'lamsinka89',
|
||||
'description': 'RedGifs user lamsinka89, ordered by best',
|
||||
},
|
||||
'playlist_mincount': 100,
|
||||
'playlist_mincount': 391,
|
||||
},
|
||||
{
|
||||
'url': 'https://www.redgifs.com/users/ignored52',
|
||||
'note': 'https://github.com/yt-dlp/yt-dlp/issues/7382',
|
||||
'info_dict': {
|
||||
'id': 'ignored52',
|
||||
'title': 'ignored52',
|
||||
'description': 'RedGifs user ignored52, ordered by recent',
|
||||
},
|
||||
'playlist_mincount': 121,
|
||||
},
|
||||
]
|
||||
|
||||
|
|
|
@ -1,22 +1,27 @@
|
|||
import base64
|
||||
import json
|
||||
import re
|
||||
import urllib.parse
|
||||
|
||||
from .common import InfoExtractor
|
||||
from ..utils import js_to_json
|
||||
from ..utils import ExtractorError, determine_ext, join_nonempty
|
||||
|
||||
|
||||
def decode_b64_url(code):
|
||||
decoded_url = re.match(r'[^[]*\[([^]]*)\]', code).groups()[0]
|
||||
return base64.b64decode(
|
||||
urllib.parse.unquote(re.sub(r'[\s"\',]', '', decoded_url)),
|
||||
).decode('utf-8')
|
||||
|
||||
|
||||
class RTPIE(InfoExtractor):
|
||||
_VALID_URL = r'https?://(?:www\.)?rtp\.pt/play/(?:(?:estudoemcasa|palco|zigzag)/)?p(?P<program_id>[0-9]+)/(?P<id>[^/?#]+)'
|
||||
_VALID_URL = r'https?://(?:(?:(?:www\.)?rtp\.pt/play/(?P<subarea>.*/)?p(?P<program_id>[0-9]+)/(?P<episode_id>e[0-9]+/)?)|(?:arquivos\.rtp\.pt/conteudos/))(?P<id>[^/?#]+)/?'
|
||||
_TESTS = [{
|
||||
'url': 'http://www.rtp.pt/play/p405/e174042/paixoes-cruzadas',
|
||||
'md5': 'e736ce0c665e459ddb818546220b4ef8',
|
||||
'url': 'https://www.rtp.pt/play/p9165/e562949/por-do-sol',
|
||||
'info_dict': {
|
||||
'id': 'e174042',
|
||||
'ext': 'mp3',
|
||||
'title': 'Paixões Cruzadas',
|
||||
'description': 'As paixões musicais de António Cartaxo e António Macedo',
|
||||
'id': 'por-do-sol',
|
||||
'ext': 'mp4',
|
||||
'title': 'Pôr do Sol Episódio 1 - de 16 Ago 2021',
|
||||
'description': 'Madalena Bourbon de Linhaça vive atormentada pelo segredo que esconde desde 1990. Matilde Bourbon de Linhaça sonha fugir com o seu amor proibido. O en',
|
||||
'thumbnail': r're:^https?://.*\.jpg',
|
||||
},
|
||||
}, {
|
||||
|
@ -30,76 +35,82 @@ class RTPIE(InfoExtractor):
|
|||
'thumbnail': r're:^https?://.*\.jpg',
|
||||
},
|
||||
}, {
|
||||
'url': 'http://www.rtp.pt/play/p831/a-quimica-das-coisas',
|
||||
'url': 'https://www.rtp.pt/play/p831/e205093/a-quimica-das-coisas',
|
||||
'only_matching': True,
|
||||
}, {
|
||||
'url': 'https://www.rtp.pt/play/estudoemcasa/p7776/portugues-1-ano',
|
||||
'url': 'https://www.rtp.pt/play/estudoemcasa/p7776/e500050/portugues-1-ano',
|
||||
'only_matching': True,
|
||||
}, {
|
||||
'url': 'https://www.rtp.pt/play/palco/p13785/l7nnon',
|
||||
'url': 'https://www.rtp.pt/play/palco/p9138/jose-afonso-traz-um-amigo-tambem',
|
||||
'only_matching': True,
|
||||
}, {
|
||||
'url': 'https://www.rtp.pt/play/p510/e798152/aleixo-fm',
|
||||
'only_matching': True,
|
||||
}]
|
||||
|
||||
_RX_OBFUSCATION = re.compile(r'''(?xs)
|
||||
atob\s*\(\s*decodeURIComponent\s*\(\s*
|
||||
(\[[0-9A-Za-z%,'"]*\])
|
||||
\s*\.\s*join\(\s*(?:""|'')\s*\)\s*\)\s*\)
|
||||
''')
|
||||
|
||||
def __unobfuscate(self, data, *, video_id):
|
||||
if data.startswith('{'):
|
||||
data = self._RX_OBFUSCATION.sub(
|
||||
lambda m: json.dumps(
|
||||
base64.b64decode(urllib.parse.unquote(
|
||||
''.join(self._parse_json(m.group(1), video_id)),
|
||||
)).decode('iso-8859-1')),
|
||||
data)
|
||||
return js_to_json(data)
|
||||
|
||||
def _real_extract(self, url):
|
||||
video_id = self._match_id(url)
|
||||
|
||||
webpage = self._download_webpage(url, video_id)
|
||||
title = self._html_search_meta(
|
||||
'twitter:title', webpage, display_name='title', fatal=True)
|
||||
|
||||
f, config = self._search_regex(
|
||||
r'''(?sx)
|
||||
(?:var\s+f\s*=\s*(?P<f>".*?"|{[^;]+?});\s*)?
|
||||
var\s+player1\s+=\s+new\s+RTPPlayer\s*\((?P<config>{(?:(?!\*/).)+?})\);(?!\s*\*/)
|
||||
''', webpage,
|
||||
'player config', group=('f', 'config'))
|
||||
# Remove comments from webpage source
|
||||
webpage = re.sub(r'(?s)/\*.*\*/', '', webpage)
|
||||
webpage = re.sub(r'(?m)(?:^|\s)//.*$', '', webpage)
|
||||
|
||||
config = self._parse_json(
|
||||
config, video_id,
|
||||
lambda data: self.__unobfuscate(data, video_id=video_id))
|
||||
f = config['file'] if not f else self._parse_json(
|
||||
f, video_id,
|
||||
lambda data: self.__unobfuscate(data, video_id=video_id))
|
||||
title = self._html_search_regex(r'<title>(.+?)</title>', webpage, 'title', default='')
|
||||
# Replace irrelevant text in title
|
||||
title = title.replace(' - RTP Play - RTP', '') or self._html_search_meta('twitter:title', webpage)
|
||||
|
||||
formats = []
|
||||
if isinstance(f, dict):
|
||||
f_hls = f.get('hls')
|
||||
if f_hls is not None:
|
||||
formats.extend(self._extract_m3u8_formats(
|
||||
f_hls, video_id, 'mp4', 'm3u8_native', m3u8_id='hls'))
|
||||
if 'Este episódio não se encontra disponí' in title:
|
||||
raise ExtractorError('Episode unavailable', expected=True)
|
||||
|
||||
f_dash = f.get('dash')
|
||||
if f_dash is not None:
|
||||
formats.extend(self._extract_mpd_formats(f_dash, video_id, mpd_id='dash'))
|
||||
part = self._html_search_regex(r'section\-parts.*<span.*>(.+?)</span>.*</ul>', webpage, 'part', default=None)
|
||||
title = join_nonempty(title, part, delim=' ')
|
||||
|
||||
# Get file key
|
||||
file_key = self._search_regex(r'\s*fileKey: "([^"]+)",', webpage, 'file key - open', default=None)
|
||||
if file_key is None:
|
||||
self.write_debug('url: obfuscated')
|
||||
file_key = self._search_regex(r'\s*fileKey: atob\( decodeURIComponent\((.*)\)\)\),', webpage, 'file key')
|
||||
url = decode_b64_url(file_key) or ''
|
||||
else:
|
||||
formats.append({
|
||||
'format_id': 'f',
|
||||
'url': f,
|
||||
'vcodec': 'none' if config.get('mediaType') == 'audio' else None,
|
||||
})
|
||||
self.write_debug('url: clean')
|
||||
url = file_key
|
||||
|
||||
if 'mp3' in url:
|
||||
full_url = 'https://cdn-ondemand.rtp.pt' + url
|
||||
elif 'mp4' in url:
|
||||
full_url = f'https://streaming-vod.rtp.pt/dash{url}/manifest.mpd'
|
||||
else:
|
||||
full_url = None
|
||||
|
||||
if not full_url:
|
||||
raise ExtractorError('No valid media source found in page')
|
||||
|
||||
poster = self._search_regex(r'\s*poster: "([^"]+)"', webpage, 'poster', fatal=False)
|
||||
|
||||
# Finally send pure JSON string for JSON parsing
|
||||
full_url = full_url.replace('drm-dash', 'dash')
|
||||
ext = determine_ext(full_url)
|
||||
|
||||
if ext == 'mpd':
|
||||
# Download via mpd file
|
||||
self.write_debug('formats: mpd')
|
||||
formats = self._extract_mpd_formats(full_url, video_id)
|
||||
else:
|
||||
self.write_debug('formats: ext={ext}')
|
||||
formats = [{
|
||||
'url': full_url,
|
||||
'ext': ext,
|
||||
}]
|
||||
|
||||
subtitles = {}
|
||||
|
||||
vtt = config.get('vtt')
|
||||
vtt = self._search_regex(r'\s*vtt: (.*]]),\s+', webpage, 'vtt', default=None)
|
||||
if vtt is not None:
|
||||
for lcode, lname, url in vtt:
|
||||
subtitles.setdefault(lcode, []).append({
|
||||
vtt_object = self._parse_json(vtt.replace("'", '"'), full_url)
|
||||
self.write_debug(f'vtt: {len(vtt_object)} subtitles')
|
||||
for lcode, lname, url in vtt_object:
|
||||
subtitles.setdefault(lcode.lower(), []).append({
|
||||
'name': lname,
|
||||
'url': url,
|
||||
})
|
||||
|
@ -109,6 +120,6 @@ def _real_extract(self, url):
|
|||
'title': title,
|
||||
'formats': formats,
|
||||
'description': self._html_search_meta(['description', 'twitter:description'], webpage),
|
||||
'thumbnail': config.get('poster') or self._og_search_thumbnail(webpage),
|
||||
'thumbnail': poster or self._og_search_thumbnail(webpage),
|
||||
'subtitles': subtitles,
|
||||
}
|
||||
|
|
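decode_b64_url() reverses the page's atob(decodeURIComponent([...].join(""))) obfuscation: it grabs the array literal, strips quotes, commas and whitespace, and base64-decodes the rest. A worked example with a made-up fileKey:

import base64
import re
import urllib.parse

code = 'atob( decodeURIComponent(["aHR0", "cHM6", "Ly9l", "eC5t", "cDQ="].join("")))'
decoded_url = re.match(r'[^[]*\[([^]]*)\]', code).groups()[0]
base64.b64decode(urllib.parse.unquote(re.sub(r'[\s"\',]', '', decoded_url))).decode('utf-8')
# -> 'https://ex.mp4'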
|
@ -13,7 +13,10 @@
|
|||
unified_timestamp,
|
||||
url_or_none,
|
||||
)
|
||||
from ..utils.traversal import traverse_obj
|
||||
from ..utils.traversal import (
|
||||
subs_list_to_dict,
|
||||
traverse_obj,
|
||||
)
|
||||
|
||||
|
||||
class RutubeBaseIE(InfoExtractor):
|
||||
|
@ -92,11 +95,11 @@ def _extract_formats_and_subtitles(self, options, video_id):
|
|||
hls_url, video_id, 'mp4', fatal=False, m3u8_id='hls')
|
||||
formats.extend(fmts)
|
||||
self._merge_subtitles(subs, target=subtitles)
|
||||
for caption in traverse_obj(options, ('captions', lambda _, v: url_or_none(v['file']))):
|
||||
subtitles.setdefault(caption.get('code') or 'ru', []).append({
|
||||
'url': caption['file'],
|
||||
'name': caption.get('langTitle'),
|
||||
})
|
||||
self._merge_subtitles(traverse_obj(options, ('captions', ..., {
|
||||
'id': 'code',
|
||||
'url': 'file',
|
||||
'name': ('langTitle', {str}),
|
||||
}, all, {subs_list_to_dict(lang='ru')})), target=subtitles)
|
||||
return formats, subtitles
|
||||
|
||||
def _download_and_extract_formats_and_subtitles(self, video_id, query=None):
|
||||
|
|
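The new captions path maps each caption dict to {'id', 'url', 'name'} and hands the list to subs_list_to_dict, whose lang parameter supplies the language for entries without a code. Illustration with made-up captions:

from yt_dlp.utils.traversal import subs_list_to_dict, traverse_obj

options = {'captions': [
    {'code': 'en', 'file': 'https://example.com/en.vtt', 'langTitle': 'English'},
    {'code': None, 'file': 'https://example.com/unknown.vtt', 'langTitle': None},
]}
traverse_obj(options, ('captions', ..., {
    'id': 'code',
    'url': 'file',
    'name': ('langTitle', {str}),
}, all, {subs_list_to_dict(lang='ru')}))
# -> {'en': [{'url': ..., 'name': 'English'}], 'ru': [{'url': ...}]}
# Entries without a language code fall back to the 'ru' default.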
|
@ -1,11 +1,9 @@
|
|||
import base64
|
||||
|
||||
from .common import InfoExtractor
|
||||
from ..aes import aes_cbc_decrypt, unpad_pkcs7
|
||||
from ..aes import aes_cbc_decrypt_bytes, unpad_pkcs7
|
||||
from ..utils import (
|
||||
ExtractorError,
|
||||
bytes_to_intlist,
|
||||
intlist_to_bytes,
|
||||
unified_strdate,
|
||||
)
|
||||
|
||||
|
@@ -68,10 +66,10 @@ def _real_extract(self, url):
         data_json = self._download_json('https://www.shemaroome.com/users/user_all_lists', video_id, data=data.encode())
         if not data_json.get('status'):
             raise ExtractorError('Premium videos cannot be downloaded yet.', expected=True)
-        url_data = bytes_to_intlist(base64.b64decode(data_json['new_play_url']))
-        key = bytes_to_intlist(base64.b64decode(data_json['key']))
-        iv = [0] * 16
-        m3u8_url = unpad_pkcs7(intlist_to_bytes(aes_cbc_decrypt(url_data, key, iv))).decode('ascii')
+        url_data = base64.b64decode(data_json['new_play_url'])
+        key = base64.b64decode(data_json['key'])
+        iv = bytes(16)
+        m3u8_url = unpad_pkcs7(aes_cbc_decrypt_bytes(url_data, key, iv)).decode('ascii')
         headers = {'stream_key': data_json['stream_key']}
         formats, m3u8_subs = self._extract_m3u8_formats_and_subtitles(m3u8_url, video_id, fatal=False, headers=headers)
         for fmt in formats:
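The change above swaps the int-list AES helpers for their bytes-based counterparts, dropping the bytes_to_intlist/intlist_to_bytes round trip. A quick round-trip sketch with the bytes API (key and plaintext are made up; assumes the encrypt helper accepts padding_mode='pkcs7', so the decrypted result can be unpadded as in the hunk):

import os

from yt_dlp.aes import aes_cbc_decrypt_bytes, aes_cbc_encrypt_bytes, unpad_pkcs7

key = os.urandom(16)
iv = bytes(16)  # all-zero IV, as in the hunk above
plaintext = b'https://example.invalid/streams/main.m3u8'

ciphertext = aes_cbc_encrypt_bytes(plaintext, key, iv, padding_mode='pkcs7')
assert unpad_pkcs7(aes_cbc_decrypt_bytes(ciphertext, key, iv)) == plaintext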
|
||||
|
|
|
@@ -199,8 +199,9 @@ class SonyLIVSeriesIE(InfoExtractor):
         },
     }]
     _API_BASE = 'https://apiv2.sonyliv.com/AGL'
+    _SORT_ORDERS = ('asc', 'desc')

-    def _entries(self, show_id):
+    def _entries(self, show_id, sort_order):
         headers = {
             'Accept': 'application/json, text/plain, */*',
             'Referer': 'https://www.sonyliv.com',
|
||||
|
@@ -215,6 +216,9 @@ def _entries(self, show_id):
                 'from': '0',
                 'to': '49',
             }), ('resultObj', 'containers', 0, 'containers', lambda _, v: int_or_none(v['id'])))
+
+        if sort_order == 'desc':
+            seasons = reversed(seasons)
         for season in seasons:
             season_id = str(season['id'])
             note = traverse_obj(season, ('metadata', 'title', {str})) or 'season'
|
||||
|
@@ -226,7 +230,7 @@ def _entries(self, show_id):
                     'from': str(cursor),
                     'to': str(cursor + 99),
                     'orderBy': 'episodeNumber',
-                    'sortOrder': 'asc',
+                    'sortOrder': sort_order,
                 }), ('resultObj', 'containers', 0, 'containers', lambda _, v: int_or_none(v['id'])))
                 if not episodes:
                     break
|
||||
|
@@ -237,4 +241,10 @@ def _entries(self, show_id):

     def _real_extract(self, url):
         show_id = self._match_id(url)
-        return self.playlist_result(self._entries(show_id), playlist_id=show_id)
+
+        sort_order = self._configuration_arg('sort_order', [self._SORT_ORDERS[0]])[0]
+        if sort_order not in self._SORT_ORDERS:
+            raise ValueError(
+                f'Invalid sort order "{sort_order}". Allowed values are: {", ".join(self._SORT_ORDERS)}')
+
+        return self.playlist_result(self._entries(show_id, sort_order), playlist_id=show_id)
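Since _configuration_arg reads extractor arguments, the new sort_order knob should be reachable through --extractor-args; a sketch via the embedding API (the show URL is hypothetical):

import yt_dlp

# CLI equivalent (assumed): --extractor-args "sonylivseries:sort_order=desc"
opts = {'extractor_args': {'sonylivseries': {'sort_order': ['desc']}}}
with yt_dlp.YoutubeDL(opts) as ydl:
    ydl.extract_info('https://www.sonyliv.com/shows/example-show-4242424242424242', download=False)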
|
||||
|
|
|
@@ -241,7 +241,7 @@ def _extract_info_dict(self, info, full_title=None, secret_token=None, extract_f
                 format_urls.add(format_url)
                 formats.append({
                     'format_id': 'download',
-                    'ext': urlhandle_detect_ext(urlh) or 'mp3',
+                    'ext': urlhandle_detect_ext(urlh, default='mp3'),
                     'filesize': int_or_none(urlh.headers.get('Content-Length')),
                     'url': format_url,
                     'quality': 10,
|
||||
|
|
|
@@ -71,9 +71,11 @@ class SpankBangIE(InfoExtractor):
     def _real_extract(self, url):
         mobj = self._match_valid_url(url)
         video_id = mobj.group('id') or mobj.group('id_2')
+        country = self.get_param('geo_bypass_country') or 'US'
+        self._set_cookie('.spankbang.com', 'country', country.upper())
         webpage = self._download_webpage(
             url.replace(f'/{video_id}/embed', f'/{video_id}/video'),
-            video_id, headers={'Cookie': 'country=US'})
+            video_id, impersonate=True)

         if re.search(r'<[^>]+\b(?:id|class)=["\']video_removed', webpage):
             raise ExtractorError(
|
||||
|
|
113
yt_dlp/extractor/uliza.py
Normal file
|
@@ -0,0 +1,113 @@
from .common import InfoExtractor
from ..utils import (
    ExtractorError,
    int_or_none,
    make_archive_id,
    parse_qs,
    time_seconds,
)
from ..utils.traversal import traverse_obj


class UlizaPlayerIE(InfoExtractor):
    _VALID_URL = r'https://player-api\.p\.uliza\.jp/v1/players/[^?#]+\?(?:[^#]*&)?name=(?P<id>[^#&]+)'
    _TESTS = [{
        'url': 'https://player-api.p.uliza.jp/v1/players/timeshift-disabled/pia/admin?type=normal&playerobjectname=ulizaPlayer&name=livestream01_dvr&repeatable=true',
        'info_dict': {
            'id': '88f3109a-f503-4d0f-a9f7-9f39ac745d84',
            'ext': 'mp4',
            'title': '88f3109a-f503-4d0f-a9f7-9f39ac745d84',
            'live_status': 'was_live',
            '_old_archive_ids': ['piaulizaportal 88f3109a-f503-4d0f-a9f7-9f39ac745d84'],
        },
    }, {
        'url': 'https://player-api.p.uliza.jp/v1/players/uliza_jp_gallery_normal/promotion/admin?type=presentation&name=cookings&targetid=player1',
        'info_dict': {
            'id': 'ae350126-5e22-4a7f-a8ac-8d0fd448b800',
            'ext': 'mp4',
            'title': 'ae350126-5e22-4a7f-a8ac-8d0fd448b800',
            'live_status': 'not_live',
            '_old_archive_ids': ['piaulizaportal ae350126-5e22-4a7f-a8ac-8d0fd448b800'],
        },
    }, {
        'url': 'https://player-api.p.uliza.jp/v1/players/default-player/pia/admin?type=normal&name=pia_movie_uliza_fix&targetid=ulizahtml5&repeatable=true',
        'info_dict': {
            'id': '0644ecc8-e354-41b4-b957-3b08a2d63df1',
            'ext': 'mp4',
            'title': '0644ecc8-e354-41b4-b957-3b08a2d63df1',
            'live_status': 'not_live',
            '_old_archive_ids': ['piaulizaportal 0644ecc8-e354-41b4-b957-3b08a2d63df1'],
        },
    }]

    def _real_extract(self, url):
        display_id = self._match_id(url)
        player_data = self._download_webpage(
            url, display_id, headers={'Referer': 'https://player-api.p.uliza.jp/'},
            note='Fetching player data', errnote='Unable to fetch player data')

        m3u8_url = self._search_regex(
            r'["\'](https://vms-api\.p\.uliza\.jp/v1/prog-index\.m3u8[^"\']+)', player_data, 'm3u8 url')
        video_id = parse_qs(m3u8_url).get('ss', [display_id])[0]

        formats = self._extract_m3u8_formats(m3u8_url, video_id)
        m3u8_type = self._search_regex(
            r'/hls/(dvr|video)/', traverse_obj(formats, (0, 'url')), 'm3u8 type', default=None)
        return {
            'id': video_id,
            'title': video_id,
            'formats': formats,
            'live_status': {
                'video': 'is_live',
                'dvr': 'was_live',  # short-term archives
            }.get(m3u8_type, 'not_live'),  # VOD or long-term archives
            '_old_archive_ids': [make_archive_id('PIAULIZAPortal', video_id)],
        }


class UlizaPortalIE(InfoExtractor):
    IE_DESC = 'ulizaportal.jp'
    _VALID_URL = r'https?://(?:www\.)?ulizaportal\.jp/pages/(?P<id>[\da-f]{8}-(?:[\da-f]{4}-){3}[\da-f]{12})'
    _TESTS = [{
        'url': 'https://ulizaportal.jp/pages/005f18b7-e810-5618-cb82-0987c5755d44',
        'info_dict': {
            'id': 'ae350126-5e22-4a7f-a8ac-8d0fd448b800',
            'display_id': '005f18b7-e810-5618-cb82-0987c5755d44',
            'title': 'プレゼンテーションプレイヤーのサンプル',
            'live_status': 'not_live',
            '_old_archive_ids': ['piaulizaportal ae350126-5e22-4a7f-a8ac-8d0fd448b800'],
        },
        'params': {
            'skip_download': True,
            'ignore_no_formats_error': True,
        },
    }, {
        'url': 'https://ulizaportal.jp/pages/005e1b23-fe93-5780-19a0-98e917cc4b7d?expires=4102412400&signature=f422a993b683e1068f946caf406d211c17d1ef17da8bef3df4a519502155aa91&version=1',
        'info_dict': {
            'id': '0644ecc8-e354-41b4-b957-3b08a2d63df1',
            'display_id': '005e1b23-fe93-5780-19a0-98e917cc4b7d',
            'title': '【確認用】視聴サンプルページ(ULIZA)',
            'live_status': 'not_live',
            '_old_archive_ids': ['piaulizaportal 0644ecc8-e354-41b4-b957-3b08a2d63df1'],
        },
        'params': {
            'skip_download': True,
            'ignore_no_formats_error': True,
        },
    }]

    def _real_extract(self, url):
        video_id = self._match_id(url)

        expires = int_or_none(traverse_obj(parse_qs(url), ('expires', 0)))
        if expires and expires <= time_seconds():
            raise ExtractorError('The link is expired', video_id=video_id, expected=True)

        webpage = self._download_webpage(url, video_id)

        player_data_url = self._search_regex(
            r'<script [^>]*\bsrc="(https://player-api\.p\.uliza\.jp/v1/players/[^"]+)"',
            webpage, 'player data url')
        return self.url_result(
            player_data_url, UlizaPlayerIE, url_transparent=True,
            display_id=video_id, video_title=self._html_extract_title(webpage))
|
|
@@ -1,189 +0,0 @@
|
|||
import functools
|
||||
import json
|
||||
|
||||
from .common import InfoExtractor
|
||||
from ..utils import (
|
||||
ExtractorError,
|
||||
OnDemandPagedList,
|
||||
int_or_none,
|
||||
parse_duration,
|
||||
qualities,
|
||||
remove_start,
|
||||
strip_or_none,
|
||||
)
|
||||
|
||||
|
||||
class VeohIE(InfoExtractor):
|
||||
_VALID_URL = r'https?://(?:www\.)?veoh\.com/(?:watch|videos|embed|iphone/#_Watch)/(?P<id>(?:v|e|yapi-)[\da-zA-Z]+)'
|
||||
|
||||
_TESTS = [{
|
||||
'url': 'http://www.veoh.com/watch/v56314296nk7Zdmz3',
|
||||
'md5': '620e68e6a3cff80086df3348426c9ca3',
|
||||
'info_dict': {
|
||||
'id': 'v56314296nk7Zdmz3',
|
||||
'ext': 'mp4',
|
||||
'title': 'Straight Backs Are Stronger',
|
||||
'description': 'md5:203f976279939a6dc664d4001e13f5f4',
|
||||
'thumbnail': 're:https://fcache\\.veoh\\.com/file/f/th56314296\\.jpg(\\?.*)?',
|
||||
'uploader': 'LUMOback',
|
||||
'duration': 46,
|
||||
'view_count': int,
|
||||
'average_rating': int,
|
||||
'comment_count': int,
|
||||
'age_limit': 0,
|
||||
'categories': ['technology_and_gaming'],
|
||||
'tags': ['posture', 'posture', 'sensor', 'back', 'pain', 'wearable', 'tech', 'lumo'],
|
||||
},
|
||||
}, {
|
||||
'url': 'http://www.veoh.com/embed/v56314296nk7Zdmz3',
|
||||
'only_matching': True,
|
||||
}, {
|
||||
'url': 'http://www.veoh.com/watch/v27701988pbTc4wzN?h1=Chile+workers+cover+up+to+avoid+skin+damage',
|
||||
'md5': '4a6ff84b87d536a6a71e6aa6c0ad07fa',
|
||||
'info_dict': {
|
||||
'id': '27701988',
|
||||
'ext': 'mp4',
|
||||
'title': 'Chile workers cover up to avoid skin damage',
|
||||
'description': 'md5:2bd151625a60a32822873efc246ba20d',
|
||||
'uploader': 'afp-news',
|
||||
'duration': 123,
|
||||
},
|
||||
'skip': 'This video has been deleted.',
|
||||
}, {
|
||||
'url': 'http://www.veoh.com/watch/v69525809F6Nc4frX',
|
||||
'md5': '4fde7b9e33577bab2f2f8f260e30e979',
|
||||
'note': 'Embedded ooyala video',
|
||||
'info_dict': {
|
||||
'id': '69525809',
|
||||
'ext': 'mp4',
|
||||
'title': 'Doctors Alter Plan For Preteen\'s Weight Loss Surgery',
|
||||
'description': 'md5:f5a11c51f8fb51d2315bca0937526891',
|
||||
'uploader': 'newsy-videos',
|
||||
},
|
||||
'skip': 'This video has been deleted.',
|
||||
}, {
|
||||
'url': 'http://www.veoh.com/watch/e152215AJxZktGS',
|
||||
'only_matching': True,
|
||||
}, {
|
||||
'url': 'https://www.veoh.com/videos/v16374379WA437rMH',
|
||||
'md5': 'cceb73f3909063d64f4b93d4defca1b3',
|
||||
'info_dict': {
|
||||
'id': 'v16374379WA437rMH',
|
||||
'ext': 'mp4',
|
||||
'title': 'Phantasmagoria 2, pt. 1-3',
|
||||
'description': 'Phantasmagoria: a Puzzle of Flesh',
|
||||
'thumbnail': 're:https://fcache\\.veoh\\.com/file/f/th16374379\\.jpg(\\?.*)?',
|
||||
'uploader': 'davidspackage',
|
||||
'duration': 968,
|
||||
'view_count': int,
|
||||
'average_rating': int,
|
||||
'comment_count': int,
|
||||
'age_limit': 18,
|
||||
'categories': ['technology_and_gaming', 'gaming'],
|
||||
'tags': ['puzzle', 'of', 'flesh'],
|
||||
},
|
||||
}]
|
||||
|
||||
def _real_extract(self, url):
|
||||
video_id = self._match_id(url)
|
||||
metadata = self._download_json(
|
||||
'https://www.veoh.com/watch/getVideo/' + video_id,
|
||||
video_id)
|
||||
video = metadata['video']
|
||||
title = video['title']
|
||||
|
||||
thumbnail_url = None
|
||||
q = qualities(['Regular', 'HQ'])
|
||||
formats = []
|
||||
for f_id, f_url in video.get('src', {}).items():
|
||||
if not f_url:
|
||||
continue
|
||||
if f_id == 'poster':
|
||||
thumbnail_url = f_url
|
||||
else:
|
||||
formats.append({
|
||||
'format_id': f_id,
|
||||
'quality': q(f_id),
|
||||
'url': f_url,
|
||||
})
|
||||
|
||||
categories = metadata.get('categoryPath')
|
||||
if not categories:
|
||||
category = remove_start(strip_or_none(video.get('category')), 'category_')
|
||||
categories = [category] if category else None
|
||||
tags = video.get('tags')
|
||||
|
||||
return {
|
||||
'id': video_id,
|
||||
'title': title,
|
||||
'description': video.get('description'),
|
||||
'thumbnail': thumbnail_url,
|
||||
'uploader': video.get('author', {}).get('nickname'),
|
||||
'duration': int_or_none(video.get('lengthBySec')) or parse_duration(video.get('length')),
|
||||
'view_count': int_or_none(video.get('views')),
|
||||
'formats': formats,
|
||||
'average_rating': int_or_none(video.get('rating')),
|
||||
'comment_count': int_or_none(video.get('numOfComments')),
|
||||
'age_limit': 18 if video.get('contentRatingId') == 2 else 0,
|
||||
'categories': categories,
|
||||
'tags': tags.split(', ') if tags else None,
|
||||
}
|
||||
|
||||
|
||||
class VeohUserIE(VeohIE): # XXX: Do not subclass from concrete IE
|
||||
_VALID_URL = r'https?://(?:www\.)?veoh\.com/users/(?P<id>[\w-]+)'
|
||||
IE_NAME = 'veoh:user'
|
||||
|
||||
_TESTS = [
|
||||
{
|
||||
'url': 'https://www.veoh.com/users/valentinazoe',
|
||||
'info_dict': {
|
||||
'id': 'valentinazoe',
|
||||
'title': 'valentinazoe (Uploads)',
|
||||
},
|
||||
'playlist_mincount': 75,
|
||||
},
|
||||
{
|
||||
'url': 'https://www.veoh.com/users/PiensaLibre',
|
||||
'info_dict': {
|
||||
'id': 'PiensaLibre',
|
||||
'title': 'PiensaLibre (Uploads)',
|
||||
},
|
||||
'playlist_mincount': 2,
|
||||
}]
|
||||
|
||||
_PAGE_SIZE = 16
|
||||
|
||||
def _fetch_page(self, uploader, page):
|
||||
response = self._download_json(
|
||||
'https://www.veoh.com/users/published/videos', uploader,
|
||||
note=f'Downloading videos page {page + 1}',
|
||||
headers={
|
||||
'x-csrf-token': self._TOKEN,
|
||||
'content-type': 'application/json;charset=UTF-8',
|
||||
},
|
||||
data=json.dumps({
|
||||
'username': uploader,
|
||||
'maxResults': self._PAGE_SIZE,
|
||||
'page': page + 1,
|
||||
'requestName': 'userPage',
|
||||
}).encode())
|
||||
if not response.get('success'):
|
||||
raise ExtractorError(response['message'])
|
||||
|
||||
for video in response['videos']:
|
||||
yield self.url_result(f'https://www.veoh.com/watch/{video["permalinkId"]}', VeohIE,
|
||||
video['permalinkId'], video.get('title'))
|
||||
|
||||
def _real_initialize(self):
|
||||
webpage = self._download_webpage(
|
||||
'https://www.veoh.com', None, note='Downloading authorization token')
|
||||
self._TOKEN = self._search_regex(
|
||||
r'csrfToken:\s*(["\'])(?P<token>[0-9a-zA-Z]{40})\1', webpage,
|
||||
'request token', group='token')
|
||||
|
||||
def _real_extract(self, url):
|
||||
uploader = self._match_id(url)
|
||||
return self.playlist_result(OnDemandPagedList(
|
||||
functools.partial(self._fetch_page, uploader),
|
||||
self._PAGE_SIZE), uploader, f'{uploader} (Uploads)')
|
|
@@ -22,7 +22,7 @@
 from .common import InfoExtractor, SearchInfoExtractor
 from .openload import PhantomJSwrapper
 from ..jsinterp import JSInterpreter
-from ..networking.exceptions import HTTPError, TransportError, network_exceptions
+from ..networking.exceptions import HTTPError, network_exceptions
 from ..utils import (
     NO_DEFAULT,
     ExtractorError,
|
||||
|
@@ -50,12 +50,12 @@
     parse_iso8601,
     parse_qs,
     qualities,
+    remove_end,
     remove_start,
     smuggle_url,
     str_or_none,
     str_to_int,
     strftime_or_none,
-    time_seconds,
     traverse_obj,
     try_call,
     try_get,
||||
|
@@ -124,14 +124,15 @@
             },
         },
         'INNERTUBE_CONTEXT_CLIENT_NAME': 62,
+        'REQUIRE_AUTH': True,
     },
     'android': {
         'INNERTUBE_CONTEXT': {
             'client': {
                 'clientName': 'ANDROID',
-                'clientVersion': '19.29.37',
+                'clientVersion': '19.44.38',
                 'androidSdkVersion': 30,
-                'userAgent': 'com.google.android.youtube/19.29.37 (Linux; U; Android 11) gzip',
+                'userAgent': 'com.google.android.youtube/19.44.38 (Linux; U; Android 11) gzip',
                 'osName': 'Android',
                 'osVersion': '11',
             },
|
||||
|
@@ -140,13 +141,14 @@
         'REQUIRE_JS_PLAYER': False,
         'REQUIRE_PO_TOKEN': True,
     },
+    # This client now requires sign-in for every video
     'android_music': {
         'INNERTUBE_CONTEXT': {
             'client': {
                 'clientName': 'ANDROID_MUSIC',
-                'clientVersion': '7.11.50',
+                'clientVersion': '7.27.52',
                 'androidSdkVersion': 30,
-                'userAgent': 'com.google.android.apps.youtube.music/7.11.50 (Linux; U; Android 11) gzip',
+                'userAgent': 'com.google.android.apps.youtube.music/7.27.52 (Linux; U; Android 11) gzip',
                 'osName': 'Android',
                 'osVersion': '11',
             },
|
||||
|
@@ -154,15 +156,16 @@
         'INNERTUBE_CONTEXT_CLIENT_NAME': 21,
         'REQUIRE_JS_PLAYER': False,
         'REQUIRE_PO_TOKEN': True,
+        'REQUIRE_AUTH': True,
     },
+    # This client now requires sign-in for every video
    'android_creator': {
        'INNERTUBE_CONTEXT': {
            'client': {
                'clientName': 'ANDROID_CREATOR',
-                'clientVersion': '24.30.100',
+                'clientVersion': '24.45.100',
                'androidSdkVersion': 30,
-                'userAgent': 'com.google.android.apps.youtube.creator/24.30.100 (Linux; U; Android 11) gzip',
+                'userAgent': 'com.google.android.apps.youtube.creator/24.45.100 (Linux; U; Android 11) gzip',
                'osName': 'Android',
                'osVersion': '11',
            },
|
||||
|
@@ -170,17 +173,18 @@
         'INNERTUBE_CONTEXT_CLIENT_NAME': 14,
         'REQUIRE_JS_PLAYER': False,
         'REQUIRE_PO_TOKEN': True,
+        'REQUIRE_AUTH': True,
     },
+    # YouTube Kids videos aren't returned on this client for some reason
     'android_vr': {
         'INNERTUBE_CONTEXT': {
             'client': {
                 'clientName': 'ANDROID_VR',
-                'clientVersion': '1.57.29',
+                'clientVersion': '1.60.19',
                 'deviceMake': 'Oculus',
                 'deviceModel': 'Quest 3',
                 'androidSdkVersion': 32,
-                'userAgent': 'com.google.android.apps.youtube.vr.oculus/1.57.29 (Linux; U; Android 12L; eureka-user Build/SQ3A.220605.009.A1) gzip',
+                'userAgent': 'com.google.android.apps.youtube.vr.oculus/1.60.19 (Linux; U; Android 12L; eureka-user Build/SQ3A.220605.009.A1) gzip',
                 'osName': 'Android',
                 'osVersion': '12L',
             },
|
||||
|
@@ -188,68 +192,56 @@
         'INNERTUBE_CONTEXT_CLIENT_NAME': 28,
         'REQUIRE_JS_PLAYER': False,
     },
-    'android_testsuite': {
-        'INNERTUBE_CONTEXT': {
-            'client': {
-                'clientName': 'ANDROID_TESTSUITE',
-                'clientVersion': '1.9',
-                'androidSdkVersion': 30,
-                'userAgent': 'com.google.android.youtube/1.9 (Linux; U; Android 11) gzip',
-                'osName': 'Android',
-                'osVersion': '11',
-            },
-        },
-        'INNERTUBE_CONTEXT_CLIENT_NAME': 30,
-        'REQUIRE_JS_PLAYER': False,
-        'PLAYER_PARAMS': '2AMB',
-    },
     # iOS clients have HLS live streams. Setting device model to get 60fps formats.
     # See: https://github.com/TeamNewPipe/NewPipeExtractor/issues/680#issuecomment-1002724558
     'ios': {
         'INNERTUBE_CONTEXT': {
             'client': {
                 'clientName': 'IOS',
-                'clientVersion': '19.29.1',
+                'clientVersion': '19.45.4',
                 'deviceMake': 'Apple',
                 'deviceModel': 'iPhone16,2',
-                'userAgent': 'com.google.ios.youtube/19.29.1 (iPhone16,2; U; CPU iOS 17_5_1 like Mac OS X;)',
+                'userAgent': 'com.google.ios.youtube/19.45.4 (iPhone16,2; U; CPU iOS 18_1_0 like Mac OS X;)',
                 'osName': 'iPhone',
-                'osVersion': '17.5.1.21F90',
+                'osVersion': '18.1.0.22B83',
             },
         },
         'INNERTUBE_CONTEXT_CLIENT_NAME': 5,
         'REQUIRE_JS_PLAYER': False,
     },
+    # This client now requires sign-in for every video
     'ios_music': {
         'INNERTUBE_CONTEXT': {
             'client': {
                 'clientName': 'IOS_MUSIC',
-                'clientVersion': '7.08.2',
+                'clientVersion': '7.27.0',
                 'deviceMake': 'Apple',
                 'deviceModel': 'iPhone16,2',
-                'userAgent': 'com.google.ios.youtubemusic/7.08.2 (iPhone16,2; U; CPU iOS 17_5_1 like Mac OS X;)',
+                'userAgent': 'com.google.ios.youtubemusic/7.27.0 (iPhone16,2; U; CPU iOS 18_1_0 like Mac OS X;)',
                 'osName': 'iPhone',
-                'osVersion': '17.5.1.21F90',
+                'osVersion': '18.1.0.22B83',
             },
         },
         'INNERTUBE_CONTEXT_CLIENT_NAME': 26,
         'REQUIRE_JS_PLAYER': False,
+        'REQUIRE_AUTH': True,
     },
+    # This client now requires sign-in for every video
     'ios_creator': {
         'INNERTUBE_CONTEXT': {
             'client': {
                 'clientName': 'IOS_CREATOR',
-                'clientVersion': '24.30.100',
+                'clientVersion': '24.45.100',
                 'deviceMake': 'Apple',
                 'deviceModel': 'iPhone16,2',
-                'userAgent': 'com.google.ios.ytcreator/24.30.100 (iPhone16,2; U; CPU iOS 17_5_1 like Mac OS X;)',
+                'userAgent': 'com.google.ios.ytcreator/24.45.100 (iPhone16,2; U; CPU iOS 18_1_0 like Mac OS X;)',
                 'osName': 'iPhone',
-                'osVersion': '17.5.1.21F90',
+                'osVersion': '18.1.0.22B83',
             },
         },
         'INNERTUBE_CONTEXT_CLIENT_NAME': 15,
         'REQUIRE_JS_PLAYER': False,
+        'REQUIRE_AUTH': True,
     },
     # mweb has 'ultralow' formats
     # See: https://github.com/yt-dlp/yt-dlp/pull/557
|
||||
|
@@ -282,8 +274,10 @@
             },
         },
         'INNERTUBE_CONTEXT_CLIENT_NAME': 85,
+        'REQUIRE_AUTH': True,
     },
-    # This client has pre-merged video+audio 720p/1080p streams
+    # This client now requires sign-in for every video
+    # It may be able to receive pre-merged video+audio 720p/1080p streams
     'mediaconnect': {
         'INNERTUBE_CONTEXT': {
             'client': {
|
||||
|
@@ -293,6 +287,7 @@
         },
         'INNERTUBE_CONTEXT_CLIENT_NAME': 95,
         'REQUIRE_JS_PLAYER': False,
+        'REQUIRE_AUTH': True,
     },
 }
|
||||
|
||||
|
@@ -321,6 +316,7 @@ def build_innertube_clients():
     ytcfg.setdefault('INNERTUBE_HOST', 'www.youtube.com')
     ytcfg.setdefault('REQUIRE_JS_PLAYER', True)
     ytcfg.setdefault('REQUIRE_PO_TOKEN', False)
+    ytcfg.setdefault('REQUIRE_AUTH', False)
     ytcfg.setdefault('PLAYER_PARAMS', None)
     ytcfg['INNERTUBE_CONTEXT']['client'].setdefault('hl', 'en')
|
||||
|
||||
|
@@ -577,208 +573,18 @@ def _real_initialize(self):
         self._check_login_required()

     def _perform_login(self, username, password):
-        auth_type, _, user = (username or '').partition('+')
-
-        if auth_type != 'oauth':
-            raise ExtractorError(self._youtube_login_hint, expected=True)
-
-        self._initialize_oauth(user, password)
|
||||
|
||||
'''
|
||||
OAuth 2.0 Device Authorization Grant flow, used by the YouTube TV client (youtube.com/tv).
|
||||
|
||||
For more information regarding OAuth 2.0 and the Device Authorization Grant flow in general, see:
|
||||
- https://developers.google.com/identity/protocols/oauth2/limited-input-device
|
||||
- https://accounts.google.com/.well-known/openid-configuration
|
||||
- https://www.rfc-editor.org/rfc/rfc8628
|
||||
- https://www.rfc-editor.org/rfc/rfc6749
|
||||
|
||||
Note: The official client appears to use a proxied version of the oauth2 endpoints on youtube.com/o/oauth2,
|
||||
which applies some modifications to the response (such as returning errors as 200 OK).
|
||||
Since the client works with the standard API, we will use that as it is well-documented.
|
||||
'''
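For reference, the Device Authorization Grant this removed code implemented boils down to two endpoints: request a device/user code, then poll the token endpoint until the user approves. A bare-bones sketch using only the standard library (client credentials and scope are the TV-client values quoted above; error handling is reduced to the authorization_pending case, and note this flow no longer yields usable YouTube tokens, which is why the release removes it):

import json
import time
import urllib.error
import urllib.request


def post_json(url, payload):
    # POST a JSON body and decode the JSON response
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode(),
        headers={'Content-Type': 'application/json'})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


CLIENT_ID = '861556708454-d6dlm3lh05idd8npek18k6be8ba3oc68.apps.googleusercontent.com'
CLIENT_SECRET = 'SboVhoG9s0rNafixCSGGKXAT'

code = post_json('https://oauth2.googleapis.com/device/code', {
    'client_id': CLIENT_ID,
    'scope': 'http://gdata.youtube.com https://www.googleapis.com/auth/youtube-paid-content',
})
print(f'Visit {code["verification_url"]} and enter {code["user_code"]}')

while True:
    time.sleep(code.get('interval', 5))  # RFC 8628 § 3.5: default poll interval is 5s
    try:
        token = post_json('https://oauth2.googleapis.com/token', {
            'client_id': CLIENT_ID,
            'client_secret': CLIENT_SECRET,
            'device_code': code['device_code'],
            'grant_type': 'urn:ietf:params:oauth:grant-type:device_code',
        })
        break
    except urllib.error.HTTPError as e:
        if json.load(e).get('error') != 'authorization_pending':
            raise
print(token['access_token'])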
|
||||
|
||||
_OAUTH_PROFILE = None
|
||||
_OAUTH_ACCESS_TOKEN_CACHE = {}
|
||||
_OAUTH_DISPLAY_ID = 'oauth'
|
||||
|
||||
# YouTube TV (TVHTML5) client. You can find these at youtube.com/tv
|
||||
_OAUTH_CLIENT_ID = '861556708454-d6dlm3lh05idd8npek18k6be8ba3oc68.apps.googleusercontent.com'
|
||||
_OAUTH_CLIENT_SECRET = 'SboVhoG9s0rNafixCSGGKXAT'
|
||||
_OAUTH_SCOPE = 'http://gdata.youtube.com https://www.googleapis.com/auth/youtube-paid-content'
|
||||
|
||||
# From https://accounts.google.com/.well-known/openid-configuration
|
||||
# Technically, these should be fetched dynamically and not hard-coded.
|
||||
# However, as these endpoints rarely change, we can risk saving an extra request for every invocation.
|
||||
_OAUTH_DEVICE_AUTHORIZATION_ENDPOINT = 'https://oauth2.googleapis.com/device/code'
|
||||
_OAUTH_TOKEN_ENDPOINT = 'https://oauth2.googleapis.com/token'
|
||||
|
||||
@property
|
||||
def _oauth_cache_key(self):
|
||||
return f'oauth_refresh_token_{self._OAUTH_PROFILE}'
|
||||
|
||||
def _read_oauth_error_response(self, response):
|
||||
return traverse_obj(
|
||||
self._webpage_read_content(response, self._OAUTH_TOKEN_ENDPOINT, self._OAUTH_DISPLAY_ID, fatal=False),
|
||||
({json.loads}, 'error', {str}))
|
||||
|
||||
def _set_oauth_info(self, token_response):
|
||||
YoutubeBaseInfoExtractor._OAUTH_ACCESS_TOKEN_CACHE.setdefault(self._OAUTH_PROFILE, {}).update({
|
||||
'access_token': token_response['access_token'],
|
||||
'token_type': token_response['token_type'],
|
||||
'expiry': time_seconds(
|
||||
seconds=traverse_obj(token_response, ('expires_in', {float_or_none}), default=300) - 10),
|
||||
})
|
||||
refresh_token = traverse_obj(token_response, ('refresh_token', {str}))
|
||||
if refresh_token:
|
||||
self.cache.store(self._NETRC_MACHINE, self._oauth_cache_key, refresh_token)
|
||||
YoutubeBaseInfoExtractor._OAUTH_ACCESS_TOKEN_CACHE[self._OAUTH_PROFILE]['refresh_token'] = refresh_token
|
||||
|
||||
def _initialize_oauth(self, user, refresh_token):
|
||||
self._OAUTH_PROFILE = user or 'default'
|
||||
|
||||
if self._OAUTH_PROFILE in YoutubeBaseInfoExtractor._OAUTH_ACCESS_TOKEN_CACHE:
|
||||
self.write_debug(f'{self._OAUTH_DISPLAY_ID}: Using cached access token for profile "{self._OAUTH_PROFILE}"')
|
||||
return
|
||||
|
||||
YoutubeBaseInfoExtractor._OAUTH_ACCESS_TOKEN_CACHE[self._OAUTH_PROFILE] = {}
|
||||
|
||||
if refresh_token:
|
||||
msg = f'{self._OAUTH_DISPLAY_ID}: Using password input as refresh token'
|
||||
if self.get_param('cachedir') is not False:
|
||||
msg += ' and caching token to disk; you should supply an empty password next time'
|
||||
self.to_screen(msg)
|
||||
self.cache.store(self._NETRC_MACHINE, self._oauth_cache_key, refresh_token)
|
||||
else:
|
||||
refresh_token = self.cache.load(self._NETRC_MACHINE, self._oauth_cache_key)
|
||||
|
||||
if refresh_token:
|
||||
YoutubeBaseInfoExtractor._OAUTH_ACCESS_TOKEN_CACHE[self._OAUTH_PROFILE]['refresh_token'] = refresh_token
|
||||
try:
|
||||
token_response = self._refresh_token(refresh_token)
|
||||
except ExtractorError as e:
|
||||
error_msg = str(e.orig_msg).replace('Failed to refresh access token: ', '')
|
||||
self.report_warning(f'{self._OAUTH_DISPLAY_ID}: Failed to refresh access token: {error_msg}')
|
||||
token_response = self._oauth_authorize
|
||||
else:
|
||||
token_response = self._oauth_authorize
|
||||
|
||||
self._set_oauth_info(token_response)
|
||||
self.write_debug(f'{self._OAUTH_DISPLAY_ID}: Logged in using profile "{self._OAUTH_PROFILE}"')
|
||||
|
||||
def _refresh_token(self, refresh_token):
|
||||
try:
|
||||
token_response = self._download_json(
|
||||
self._OAUTH_TOKEN_ENDPOINT,
|
||||
video_id=self._OAUTH_DISPLAY_ID,
|
||||
note='Refreshing access token',
|
||||
data=json.dumps({
|
||||
'client_id': self._OAUTH_CLIENT_ID,
|
||||
'client_secret': self._OAUTH_CLIENT_SECRET,
|
||||
'refresh_token': refresh_token,
|
||||
'grant_type': 'refresh_token',
|
||||
}).encode(),
|
||||
headers={'Content-Type': 'application/json'})
|
||||
except ExtractorError as e:
|
||||
if isinstance(e.cause, HTTPError):
|
||||
error = self._read_oauth_error_response(e.cause.response)
|
||||
if error == 'invalid_grant':
|
||||
# RFC6749 § 5.2
|
||||
raise ExtractorError(
|
||||
'Failed to refresh access token: Refresh token is invalid, revoked, or expired (invalid_grant)',
|
||||
expected=True, video_id=self._OAUTH_DISPLAY_ID)
|
||||
raise ExtractorError(
|
||||
f'Failed to refresh access token: Authorization server returned error {error}',
|
||||
video_id=self._OAUTH_DISPLAY_ID)
|
||||
raise
|
||||
return token_response
|
||||
|
||||
@property
|
||||
def _oauth_authorize(self):
|
||||
code_response = self._download_json(
|
||||
self._OAUTH_DEVICE_AUTHORIZATION_ENDPOINT,
|
||||
video_id=self._OAUTH_DISPLAY_ID,
|
||||
note='Initializing authorization flow',
|
||||
data=json.dumps({
|
||||
'client_id': self._OAUTH_CLIENT_ID,
|
||||
'scope': self._OAUTH_SCOPE,
|
||||
}).encode(),
|
||||
headers={'Content-Type': 'application/json'})
|
||||
|
||||
verification_url = traverse_obj(code_response, ('verification_url', {str}))
|
||||
user_code = traverse_obj(code_response, ('user_code', {str}))
|
||||
-            if not verification_url or not user_code:
-                raise ExtractorError(
-                    'Authorization server did not provide verification_url or user_code', video_id=self._OAUTH_DISPLAY_ID)
+        if username.startswith('oauth'):
+            raise ExtractorError(
+                f'Login with OAuth is no longer supported. {self._youtube_login_hint}', expected=True)
|
||||
|
||||
# note: The whitespace is intentional
|
||||
self.to_screen(
|
||||
f'{self._OAUTH_DISPLAY_ID}: To give yt-dlp access to your account, '
|
||||
f'go to {verification_url} and enter code {user_code}')
|
||||
|
||||
# RFC8628 § 3.5: default poll interval is 5 seconds if not provided
|
||||
poll_interval = traverse_obj(code_response, ('interval', {int}), default=5)
|
||||
|
||||
for retry in self.RetryManager():
|
||||
while True:
|
||||
try:
|
||||
token_response = self._download_json(
|
||||
self._OAUTH_TOKEN_ENDPOINT,
|
||||
video_id=self._OAUTH_DISPLAY_ID,
|
||||
note=False,
|
||||
errnote='Failed to request access token',
|
||||
data=json.dumps({
|
||||
'client_id': self._OAUTH_CLIENT_ID,
|
||||
'client_secret': self._OAUTH_CLIENT_SECRET,
|
||||
'device_code': code_response['device_code'],
|
||||
'grant_type': 'urn:ietf:params:oauth:grant-type:device_code',
|
||||
}).encode(),
|
||||
headers={'Content-Type': 'application/json'})
|
||||
except ExtractorError as e:
|
||||
if isinstance(e.cause, TransportError):
|
||||
retry.error = e
|
||||
break
|
||||
elif isinstance(e.cause, HTTPError):
|
||||
error = self._read_oauth_error_response(e.cause.response)
|
||||
if not error:
|
||||
retry.error = e
|
||||
break
|
||||
|
||||
if error == 'authorization_pending':
|
||||
time.sleep(poll_interval)
|
||||
continue
|
||||
elif error == 'expired_token':
|
||||
raise ExtractorError(
|
||||
'Authorization timed out', expected=True, video_id=self._OAUTH_DISPLAY_ID)
|
||||
elif error == 'access_denied':
|
||||
raise ExtractorError(
|
||||
'You denied access to an account', expected=True, video_id=self._OAUTH_DISPLAY_ID)
|
||||
elif error == 'slow_down':
|
||||
# RFC8628 § 3.5: add 5 seconds to the poll interval
|
||||
poll_interval += 5
|
||||
time.sleep(poll_interval)
|
||||
continue
|
||||
else:
|
||||
raise ExtractorError(
|
||||
f'Authorization server returned an error when fetching access token: {error}',
|
||||
video_id=self._OAUTH_DISPLAY_ID)
|
||||
raise
|
||||
|
||||
return token_response
|
||||
|
||||
-    def _update_oauth(self):
-        token = YoutubeBaseInfoExtractor._OAUTH_ACCESS_TOKEN_CACHE.get(self._OAUTH_PROFILE)
-        if token is None or token['expiry'] > time.time():
-            return
-
-        self._set_oauth_info(self._refresh_token(token['refresh_token']))
+        self.report_warning(
+            f'Login with password is not supported for YouTube. {self._youtube_login_hint}')
|
||||
|
||||
     @property
     def _youtube_login_hint(self):
-        return ('Use --username=oauth[+PROFILE] --password="" to log in using oauth, '
-                f'or else u{self._login_hint(method="cookies")[1:]}. '
-                'See https://github.com/yt-dlp/yt-dlp/wiki/Extractors#logging-in-with-oauth for more on how to use oauth. '
-                'See https://github.com/yt-dlp/yt-dlp/wiki/Extractors#exporting-youtube-cookies for help with cookies')
+        return (f'{self._login_hint(method="cookies")}. Also see '
+                'https://github.com/yt-dlp/yt-dlp/wiki/Extractors#exporting-youtube-cookies '
+                'for tips on effectively exporting YouTube cookies')
|
||||
|
||||
def _check_login_required(self):
|
||||
if self._LOGIN_REQUIRED and not self.is_authenticated:
|
||||
|
@@ -928,7 +734,7 @@ def _extract_visitor_data(self, *args):

     @functools.cached_property
     def is_authenticated(self):
-        return self._OAUTH_PROFILE or bool(self._generate_sapisidhash_header())
+        return bool(self._generate_sapisidhash_header())

     def extract_ytcfg(self, video_id, webpage):
         if not webpage:
|
||||
|
@@ -938,16 +744,6 @@ def extract_ytcfg(self, video_id, webpage):
             r'ytcfg\.set\s*\(\s*({.+?})\s*\)\s*;', webpage, 'ytcfg',
             default='{}'), video_id, fatal=False) or {}

-    def _generate_oauth_headers(self):
-        self._update_oauth()
-        oauth_token = YoutubeBaseInfoExtractor._OAUTH_ACCESS_TOKEN_CACHE.get(self._OAUTH_PROFILE)
-        if not oauth_token:
-            return {}
-
-        return {
-            'Authorization': f'{oauth_token["token_type"]} {oauth_token["access_token"]}',
-        }
-
     def _generate_cookie_auth_headers(self, *, ytcfg=None, account_syncid=None, session_index=None, origin=None, **kwargs):
         headers = {}
         account_syncid = account_syncid or self._extract_account_syncid(ytcfg)
|
||||
|
@@ -977,14 +773,10 @@ def generate_api_headers(
             'Origin': origin,
             'X-Goog-Visitor-Id': visitor_data or self._extract_visitor_data(ytcfg),
             'User-Agent': self._ytcfg_get_safe(ytcfg, lambda x: x['INNERTUBE_CONTEXT']['client']['userAgent'], default_client=default_client),
-            **self._generate_oauth_headers(),
             **self._generate_cookie_auth_headers(ytcfg=ytcfg, account_syncid=account_syncid, session_index=session_index, origin=origin),
         }
         return filter_dict(headers)

-    def _generate_webpage_headers(self):
-        return self._generate_oauth_headers()
-
     def _download_ytcfg(self, client, video_id):
         url = {
             'web': 'https://www.youtube.com',
|
||||
|
@@ -994,8 +786,7 @@ def _download_ytcfg(self, client, video_id):
         if not url:
             return {}
         webpage = self._download_webpage(
-            url, video_id, fatal=False, note=f'Downloading {client.replace("_", " ").strip()} client config',
-            headers=self._generate_webpage_headers())
+            url, video_id, fatal=False, note=f'Downloading {client.replace("_", " ").strip()} client config')
         return self.extract_ytcfg(video_id, webpage) or {}

     @staticmethod
|
||||
|
@@ -3260,8 +3051,7 @@ def _load_player(self, video_id, player_url, fatal=True):
             code = self._download_webpage(
                 player_url, video_id, fatal=fatal,
                 note='Downloading player ' + player_id,
-                errnote=f'Download of {player_url} failed',
-                headers=self._generate_webpage_headers())
+                errnote=f'Download of {player_url} failed')
             if code:
                 self._code_cache[player_id] = code
         return self._code_cache.get(player_id)
|
||||
|
@@ -3544,8 +3334,7 @@ def _mark_watched(self, video_id, player_responses):

         self._download_webpage(
             url, video_id, f'Marking {label}watched',
-            'Unable to mark watched', fatal=False,
-            headers=self._generate_webpage_headers())
+            'Unable to mark watched', fatal=False)

     @classmethod
     def _extract_from_webpage(cls, url, webpage):
|
||||
|
@@ -4059,9 +3848,10 @@ def _get_requested_clients(self, url, smuggled_data):
         if smuggled_data.get('is_music_url') or self.is_music_url(url):
             for requested_client in requested_clients:
                 _, base_client, variant = _split_innertube_client(requested_client)
-                music_client = f'{base_client}_music'
+                music_client = f'{base_client}_music' if base_client != 'mweb' else 'web_music'
                 if variant != 'music' and music_client in INNERTUBE_CLIENTS:
-                    requested_clients.append(music_client)
+                    if not INNERTUBE_CLIENTS[music_client]['REQUIRE_AUTH'] or self.is_authenticated:
+                        requested_clients.append(music_client)

         return orderedSet(requested_clients)
|
||||
|
||||
|
@@ -4174,10 +3964,10 @@ def append_client(*client_names):
                     self.to_screen(
                         f'{video_id}: This video is age-restricted and YouTube is requiring '
                         'account age-verification; some formats may be missing', only_once=True)
-                    # web_creator and mediaconnect can work around the age-verification requirement
-                    # _testsuite & _vr variants can also work around age-verification
+                    # web_creator can work around the age-verification requirement
+                    # android_vr and mediaconnect may also be able to work around age-verification
                     # tv_embedded may(?) still work around age-verification if the video is embeddable
-                    append_client('web_creator', 'mediaconnect')
+                    append_client('web_creator')

         prs.extend(deprioritized_prs)
|
||||
|
||||
|
@@ -4526,7 +4316,7 @@ def _download_player_responses(self, url, smuggled_data, video_id, webpage_url):
         if pp:
             query['pp'] = pp
         webpage = self._download_webpage(
-            webpage_url, video_id, fatal=False, query=query, headers=self._generate_webpage_headers())
+            webpage_url, video_id, fatal=False, query=query)

         master_ytcfg = self.extract_ytcfg(video_id, webpage) or self._get_default_ytcfg()
|
||||
|
||||
|
@@ -4669,6 +4459,9 @@ def feed_entry(name):
                 self.raise_geo_restricted(subreason, countries, metadata_available=True)
             reason += f'. {subreason}'
         if reason:
+            if 'sign in' in reason.lower():
+                reason = remove_end(reason, 'This helps protect our community. Learn more')
+                reason = f'{remove_end(reason.strip(), ".")}. {self._youtube_login_hint}'
             self.raise_no_formats(reason, expected=True)

         keywords = get_first(video_details, 'keywords', expected_type=list) or []
|
||||
|
@@ -5294,7 +5087,7 @@ def _playlist_entries(self, video_list_renderer):
     def _rich_entries(self, rich_grid_renderer):
         renderer = traverse_obj(
             rich_grid_renderer,
-            ('content', ('videoRenderer', 'reelItemRenderer', 'playlistRenderer', 'shortsLockupViewModel'), any)) or {}
+            ('content', ('videoRenderer', 'reelItemRenderer', 'playlistRenderer', 'shortsLockupViewModel', 'lockupViewModel'), any)) or {}
         video_id = renderer.get('videoId')
         if video_id:
             yield self._extract_video(renderer)
|
||||
|
@@ -5321,6 +5114,18 @@ def _rich_entries(self, rich_grid_renderer):
                 })),
                 thumbnails=self._extract_thumbnails(renderer, 'thumbnail', final_key='sources'))
             return
+        # lockupViewModel extraction
+        content_id = renderer.get('contentId')
+        if content_id and renderer.get('contentType') == 'LOCKUP_CONTENT_TYPE_PODCAST':
+            yield self.url_result(
+                f'https://www.youtube.com/playlist?list={content_id}',
+                ie=YoutubeTabIE, video_id=content_id,
+                **traverse_obj(renderer, {
+                    'title': ('metadata', 'lockupMetadataViewModel', 'title', 'content', {str}),
+                }),
+                thumbnails=self._extract_thumbnails(renderer, (
+                    'contentImage', 'collectionThumbnailViewModel', 'primaryThumbnail', 'thumbnailViewModel', 'image'), final_key='sources'))
+            return

     def _video_entry(self, video_renderer):
         video_id = video_renderer.get('videoId')
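A toy illustration of the traverse_obj template used in the lockupViewModel branch above, fed with a hand-made renderer dict (the shape is assumed from the extraction code, not a captured API response):

from yt_dlp.utils.traversal import traverse_obj

renderer = {
    'contentId': 'PLexample1234567890',
    'contentType': 'LOCKUP_CONTENT_TYPE_PODCAST',
    'metadata': {'lockupMetadataViewModel': {'title': {'content': 'Example Podcast'}}},
}
print(traverse_obj(renderer, {
    'title': ('metadata', 'lockupMetadataViewModel', 'title', 'content', {str}),
}))
# -> {'title': 'Example Podcast'}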
|
||||
|
@@ -5814,7 +5619,7 @@ def _extract_webpage(self, url, item_id, fatal=True):
         webpage, data = None, None
         for retry in self.RetryManager(fatal=fatal):
             try:
-                webpage = self._download_webpage(url, item_id, note='Downloading webpage', headers=self._generate_webpage_headers())
+                webpage = self._download_webpage(url, item_id, note='Downloading webpage')
                 data = self.extract_yt_initial_data(item_id, webpage or '', fatal=fatal) or {}
             except ExtractorError as e:
                 if isinstance(e.cause, network_exceptions):
|
||||
|
@@ -6913,22 +6718,22 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
         },
         'playlist_count': 0,
     }, {
-        # Podcasts tab, with rich entry playlistRenderers
+        # Podcasts tab, with rich entry lockupViewModel
         'url': 'https://www.youtube.com/@99percentinvisiblepodcast/podcasts',
         'info_dict': {
             'id': 'UCVMF2HD4ZgC0QHpU9Yq5Xrw',
             'channel_id': 'UCVMF2HD4ZgC0QHpU9Yq5Xrw',
             'uploader_url': 'https://www.youtube.com/@99percentinvisiblepodcast',
             'description': 'md5:3a0ed38f1ad42a68ef0428c04a15695c',
-            'title': '99 Percent Invisible - Podcasts',
-            'uploader': '99 Percent Invisible',
+            'title': '99% Invisible - Podcasts',
+            'uploader': '99% Invisible',
             'channel_follower_count': int,
             'channel_url': 'https://www.youtube.com/channel/UCVMF2HD4ZgC0QHpU9Yq5Xrw',
             'tags': [],
-            'channel': '99 Percent Invisible',
+            'channel': '99% Invisible',
             'uploader_id': '@99percentinvisiblepodcast',
         },
-        'playlist_count': 0,
+        'playlist_count': 5,
     }, {
         # Releases tab, with rich entry playlistRenderers (same as Podcasts tab)
         'url': 'https://www.youtube.com/@AHimitsu/releases',
|
|
|
@@ -419,7 +419,9 @@ def _alias_callback(option, opt_str, value, parser, opts, nargs):
     general.add_option(
         '--flat-playlist',
         action='store_const', dest='extract_flat', const='in_playlist', default=False,
-        help='Do not extract the videos of a playlist, only list them')
+        help=(
+            'Do not extract a playlist\'s URL result entries; '
+            'some entry metadata may be missing and downloading may be bypassed'))
     general.add_option(
         '--no-flat-playlist',
         action='store_false', dest='extract_flat',
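The reworded help maps to the extract_flat parameter in the embedding API; a small sketch (the playlist URL is a placeholder):

import yt_dlp

# `--flat-playlist` ~ extract_flat='in_playlist': entries stay as URL results,
# so per-entry extraction (and any downloading) is skipped
with yt_dlp.YoutubeDL({'extract_flat': 'in_playlist'}) as ydl:
    info = ydl.extract_info('https://www.youtube.com/playlist?list=PLxxxxxxxxxxxxxxxxxx', download=False)
    for entry in info.get('entries') or []:
        print(entry.get('title'), entry.get('url'))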
|
||||
|
|
|
@@ -9,7 +9,6 @@
     RetryManager,
     _configuration_args,
     deprecation_warning,
-    encodeFilename,
 )


@@ -151,7 +150,7 @@ def run(self, information):

     def try_utime(self, path, atime, mtime, errnote='Cannot update utime of file'):
         try:
-            os.utime(encodeFilename(path), (atime, mtime))
+            os.utime(path, (atime, mtime))
         except Exception:
             self.report_warning(errnote)
|
||||
|
||||
|
|
|
@@ -12,7 +12,6 @@
     PostProcessingError,
     check_executable,
     encodeArgument,
-    encodeFilename,
     prepend_extension,
     shell_quote,
 )

@@ -68,7 +67,7 @@ def run(self, info):
             self.to_screen('There are no thumbnails on disk')
             return [], info
         thumbnail_filename = info['thumbnails'][idx]['filepath']
-        if not os.path.exists(encodeFilename(thumbnail_filename)):
+        if not os.path.exists(thumbnail_filename):
             self.report_warning('Skipping embedding the thumbnail because the file is missing.')
             return [], info

@@ -85,7 +84,7 @@ def run(self, info):
             thumbnail_filename = convertor.convert_thumbnail(thumbnail_filename, 'png')
             thumbnail_ext = 'png'

-        mtime = os.stat(encodeFilename(filename)).st_mtime
+        mtime = os.stat(filename).st_mtime

         success = True
         if info['ext'] == 'mp3':

@@ -154,12 +153,12 @@ def run(self, info):
             else:
                 if not prefer_atomicparsley:
                     self.to_screen('mutagen was not found. Falling back to AtomicParsley')
-                cmd = [encodeFilename(atomicparsley, True),
-                       encodeFilename(filename, True),
+                cmd = [atomicparsley,
+                       filename,
                        encodeArgument('--artwork'),
-                       encodeFilename(thumbnail_filename, True),
+                       thumbnail_filename,
                        encodeArgument('-o'),
-                       encodeFilename(temp_filename, True)]
+                       temp_filename]
                 cmd += [encodeArgument(o) for o in self._configuration_args('AtomicParsley')]

                 self._report_run('atomicparsley', filename)
|
||||
|
|
|
@@ -21,7 +21,6 @@
     determine_ext,
     dfxp2srt,
     encodeArgument,
-    encodeFilename,
     filter_dict,
     float_or_none,
     is_outdated_version,

@@ -243,13 +242,13 @@ def get_audio_codec(self, path):
         try:
             if self.probe_available:
                 cmd = [
-                    encodeFilename(self.probe_executable, True),
+                    self.probe_executable,
                     encodeArgument('-show_streams')]
             else:
                 cmd = [
-                    encodeFilename(self.executable, True),
+                    self.executable,
                     encodeArgument('-i')]
-            cmd.append(encodeFilename(self._ffmpeg_filename_argument(path), True))
+            cmd.append(self._ffmpeg_filename_argument(path))
             self.write_debug(f'{self.basename} command line: {shell_quote(cmd)}')
             stdout, stderr, returncode = Popen.run(
                 cmd, text=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

@@ -282,7 +281,7 @@ def get_metadata_object(self, path, opts=[]):
         self.check_version()

         cmd = [
-            encodeFilename(self.probe_executable, True),
+            self.probe_executable,
             encodeArgument('-hide_banner'),
             encodeArgument('-show_format'),
             encodeArgument('-show_streams'),

@@ -335,9 +334,9 @@ def real_run_ffmpeg(self, input_path_opts, output_path_opts, *, expected_retcode
         self.check_version()

         oldest_mtime = min(
-            os.stat(encodeFilename(path)).st_mtime for path, _ in input_path_opts if path)
+            os.stat(path).st_mtime for path, _ in input_path_opts if path)

-        cmd = [encodeFilename(self.executable, True), encodeArgument('-y')]
+        cmd = [self.executable, encodeArgument('-y')]
         # avconv does not have repeat option
         if self.basename == 'ffmpeg':
             cmd += [encodeArgument('-loglevel'), encodeArgument('repeat+info')]

@@ -353,7 +352,7 @@ def make_args(file, args, name, number):
                 args.append('-i')
             return (
                 [encodeArgument(arg) for arg in args]
-                + [encodeFilename(self._ffmpeg_filename_argument(file), True)])
+                + [self._ffmpeg_filename_argument(file)])

         for arg_type, path_opts in (('i', input_path_opts), ('o', output_path_opts)):
             cmd += itertools.chain.from_iterable(

@@ -522,8 +521,8 @@ def run(self, information):
             return [], information
         orig_path = prepend_extension(path, 'orig')
         temp_path = prepend_extension(path, 'temp')
-        if (self._nopostoverwrites and os.path.exists(encodeFilename(new_path))
-                and os.path.exists(encodeFilename(orig_path))):
+        if (self._nopostoverwrites and os.path.exists(new_path)
+                and os.path.exists(orig_path)):
             self.to_screen(f'Post-process file {new_path} exists, skipping')
             return [], information

@@ -838,7 +837,7 @@ def run(self, info):
             args.extend(['-map', f'{i}:v:0'])
         self.to_screen(f'Merging formats into "{filename}"')
         self.run_ffmpeg_multiple_files(info['__files_to_merge'], temp_filename, args)
-        os.rename(encodeFilename(temp_filename), encodeFilename(filename))
+        os.rename(temp_filename, filename)
         return info['__files_to_merge'], info

     def can_merge(self):

@@ -1039,7 +1038,7 @@ def _prepare_filename(self, number, chapter, info):

     def _ffmpeg_args_for_chapter(self, number, chapter, info):
         destination = self._prepare_filename(number, chapter, info)
-        if not self._downloader._ensure_dir_exists(encodeFilename(destination)):
+        if not self._downloader._ensure_dir_exists(destination):
             return

         chapter['filepath'] = destination
|
||||
|
|
|
@@ -4,8 +4,6 @@
 from ..compat import shutil
 from ..utils import (
     PostProcessingError,
-    decodeFilename,
-    encodeFilename,
     make_dir,
 )

@@ -21,25 +19,25 @@ def pp_key(cls):
         return 'MoveFiles'

     def run(self, info):
-        dl_path, dl_name = os.path.split(encodeFilename(info['filepath']))
+        dl_path, dl_name = os.path.split(info['filepath'])
         finaldir = info.get('__finaldir', dl_path)
         finalpath = os.path.join(finaldir, dl_name)
         if self._downloaded:
-            info['__files_to_move'][info['filepath']] = decodeFilename(finalpath)
+            info['__files_to_move'][info['filepath']] = finalpath

-        make_newfilename = lambda old: decodeFilename(os.path.join(finaldir, os.path.basename(encodeFilename(old))))
+        make_newfilename = lambda old: os.path.join(finaldir, os.path.basename(old))
         for oldfile, newfile in info['__files_to_move'].items():
             if not newfile:
                 newfile = make_newfilename(oldfile)
-            if os.path.abspath(encodeFilename(oldfile)) == os.path.abspath(encodeFilename(newfile)):
+            if os.path.abspath(oldfile) == os.path.abspath(newfile):
                 continue
-            if not os.path.exists(encodeFilename(oldfile)):
+            if not os.path.exists(oldfile):
                 self.report_warning(f'File "{oldfile}" cannot be found')
                 continue
-            if os.path.exists(encodeFilename(newfile)):
+            if os.path.exists(newfile):
                 if self.get_param('overwrites', True):
                     self.report_warning(f'Replacing existing file "{newfile}"')
-                    os.remove(encodeFilename(newfile))
+                    os.remove(newfile)
                 else:
                     self.report_warning(
                         f'Cannot move file "{oldfile}" out of temporary directory since "{newfile}" already exists. ')
|
||||
|
|
|
@@ -9,7 +9,6 @@
     check_executable,
     cli_option,
     encodeArgument,
-    encodeFilename,
     prepend_extension,
     shell_quote,
     str_or_none,

@@ -52,7 +51,7 @@ def run(self, information):
             return [], information

         filename = information['filepath']
-        if not os.path.exists(encodeFilename(filename)):  # no download
+        if not os.path.exists(filename):  # no download
             return [], information

         if information['extractor_key'].lower() != 'youtube':

@@ -71,8 +70,8 @@ def run(self, information):
         self.report_warning('If sponskrub is run multiple times, unintended parts of the video could be cut out.')

         temp_filename = prepend_extension(filename, self._temp_ext)
-        if os.path.exists(encodeFilename(temp_filename)):
-            os.remove(encodeFilename(temp_filename))
+        if os.path.exists(temp_filename):
+            os.remove(temp_filename)

         cmd = [self.path]
         if not self.cutout:
|
||||
|
|
|
@@ -1,7 +1,6 @@
 import os

 from .common import PostProcessor
-from ..compat import compat_os_name
 from ..utils import (
     PostProcessingError,
     XAttrMetadataError,

@@ -57,7 +56,7 @@ def run(self, info):
             elif e.reason == 'VALUE_TOO_LONG':
                 self.report_warning(f'Unable to write extended attribute "{xattrname}" due to too long values.')
             else:
-                tip = ('You need to use NTFS' if compat_os_name == 'nt'
+                tip = ('You need to use NTFS' if os.name == 'nt'
                        else 'You may have to enable them in your "/etc/fstab"')
                 raise PostProcessingError(f'This filesystem doesn\'t support extended attributes. {tip}')
|
||||
|
||||
|
|
|
@@ -13,7 +13,6 @@
 from dataclasses import dataclass
 from zipimport import zipimporter

-from .compat import compat_realpath
 from .networking import Request
 from .networking.exceptions import HTTPError, network_exceptions
 from .utils import (

@@ -201,8 +200,6 @@ class UpdateInfo:
     binary_name: str | None = _get_binary_name()  # noqa: RUF009: Always returns the same value
     checksum: str | None = None

-    _has_update = True
-

 class Updater:
     # XXX: use class variables to simplify testing

@@ -523,7 +520,7 @@ def update(self, update_info=NO_DEFAULT):
     @functools.cached_property
     def filename(self):
         """Filename of the executable"""
-        return compat_realpath(_get_variant_and_executable_path()[1])
+        return os.path.realpath(_get_variant_and_executable_path()[1])

     @functools.cached_property
     def cmd(self):
|
||||
|
@ -562,62 +559,14 @@ def _report_network_error(self, action, delim=';', tag=None):
|
|||
f'Unable to {action}{delim} visit '
|
||||
f'https://github.com/{self.requested_repo}/releases/{path}', True)
|
||||
|
||||
# XXX: Everything below this line in this class is deprecated / for compat only
|
||||
@property
|
||||
def _target_tag(self):
|
||||
"""Deprecated; requested tag with 'tags/' prepended when necessary for API calls"""
|
||||
return f'tags/{self.requested_tag}' if self.requested_tag != 'latest' else self.requested_tag
|
||||
|
||||
def _check_update(self):
|
||||
"""Deprecated; report whether there is an update available"""
|
||||
return bool(self.query_update(_output=True))
|
||||
|
||||
def __getattr__(self, attribute: str):
|
||||
"""Compat getter function for deprecated attributes"""
|
||||
deprecated_props_map = {
|
||||
'check_update': '_check_update',
|
||||
'target_tag': '_target_tag',
|
||||
'target_channel': 'requested_channel',
|
||||
}
|
||||
update_info_props_map = {
|
||||
'has_update': '_has_update',
|
||||
'new_version': 'version',
|
||||
'latest_version': 'requested_version',
|
||||
'release_name': 'binary_name',
|
||||
'release_hash': 'checksum',
|
||||
}
|
||||
|
||||
if attribute not in deprecated_props_map and attribute not in update_info_props_map:
|
||||
raise AttributeError(f'{type(self).__name__!r} object has no attribute {attribute!r}')
|
||||
|
||||
msg = f'{type(self).__name__}.{attribute} is deprecated and will be removed in a future version'
|
||||
if attribute in deprecated_props_map:
|
||||
source_name = deprecated_props_map[attribute]
|
||||
if not source_name.startswith('_'):
|
||||
msg += f'. Please use {source_name!r} instead'
|
||||
source = self
|
||||
mapping = deprecated_props_map
|
||||
|
||||
else: # attribute in update_info_props_map
|
||||
msg += '. Please call query_update() instead'
|
||||
source = self.query_update()
|
||||
if source is None:
|
||||
source = UpdateInfo('', None, None, None)
|
||||
source._has_update = False
|
||||
mapping = update_info_props_map
|
||||
|
||||
deprecation_warning(msg)
|
||||
for target_name, source_name in mapping.items():
|
||||
value = getattr(source, source_name)
|
||||
setattr(self, target_name, value)
|
||||
|
||||
return getattr(self, attribute)
|
||||
|
||||
|
||||
def run_update(ydl):
|
||||
"""Update the program file with the latest version from the repository
|
||||
@returns Whether there was a successful update (No update = False)
|
||||
"""
|
||||
deprecation_warning(
|
||||
'"yt_dlp.update.run_update(ydl)" is deprecated and may be removed in a future version. '
|
||||
'Use "yt_dlp.update.Updater(ydl).update()" instead')
|
||||
return Updater(ydl).update()
|
||||
|
||||
|
||||
|
|
|
@@ -9,31 +9,23 @@
 del passthrough_module


-from ._utils import preferredencoding
+import re
+import struct


-def encodeFilename(s, for_subprocess=False):
-    assert isinstance(s, str)
-    return s
+def bytes_to_intlist(bs):
+    if not bs:
+        return []
+    if isinstance(bs[0], int):  # Python 3
+        return list(bs)
+    else:
+        return [ord(c) for c in bs]


-def decodeFilename(b, for_subprocess=False):
-    return b
+def intlist_to_bytes(xs):
+    if not xs:
+        return b''
+    return struct.pack('%dB' % len(xs), *xs)
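A quick sanity check of the two compat shims added above against the builtins that replaced them in the main codebase (uses the helpers defined directly above):

assert bytes_to_intlist(b'\x00\x7f\xff') == list(b'\x00\x7f\xff') == [0, 127, 255]
assert intlist_to_bytes([0, 127, 255]) == bytes([0, 127, 255]) == b'\x00\x7f\xff'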
|
||||
|
||||
|
||||
-def decodeArgument(b):
-    return b
-
-
-def decodeOption(optval):
-    if optval is None:
-        return optval
-    if isinstance(optval, bytes):
-        optval = optval.decode(preferredencoding())
-
-    assert isinstance(optval, str)
-    return optval
-
-
-def error_to_compat_str(err):
-    return str(err)
+compiled_regex_type = type(re.compile(''))
|
||||
|
|
|
@@ -313,3 +313,30 @@ def make_HTTPS_handler(params, **kwargs):

 def process_communicate_or_kill(p, *args, **kwargs):
     return Popen.communicate_or_kill(p, *args, **kwargs)
+
+
+def encodeFilename(s, for_subprocess=False):
+    assert isinstance(s, str)
+    return s
+
+
+def decodeFilename(b, for_subprocess=False):
+    return b
+
+
+def decodeArgument(b):
+    return b
+
+
+def decodeOption(optval):
+    if optval is None:
+        return optval
+    if isinstance(optval, bytes):
+        optval = optval.decode(preferredencoding())
+
+    assert isinstance(optval, str)
+    return optval
+
+
+def error_to_compat_str(err):
+    return str(err)
|
||||
|
|
|
@@ -49,15 +49,11 @@
     compat_etree_fromstring,
     compat_expanduser,
     compat_HTMLParseError,
-    compat_os_name,
 )
 from ..dependencies import xattr

 __name__ = __name__.rsplit('.', 1)[0]  # noqa: A001: Pretend to be the parent module

-# This is not clearly defined otherwise
-compiled_regex_type = type(re.compile(''))
-

 class NO_DEFAULT:
     pass
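Most of the remaining hunks in this compare are the same mechanical substitution: `compat_os_name` was effectively a frozen alias of `os.name`, so the stdlib attribute can be read directly.

    import os

    # The only two values these checks care about: 'nt' on Windows, 'posix' elsewhere
    print(os.name)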
@@ -216,7 +212,7 @@ def partial_application(func):
     sig = inspect.signature(func)
     required_args = [
         param.name for param in sig.parameters.values()
-        if param.kind in (inspect.Parameter.POSITIONAL_ONLY, inspect.Parameter.POSITIONAL_OR_KEYWORD, inspect.Parameter.VAR_POSITIONAL)
+        if param.kind in (inspect.Parameter.POSITIONAL_ONLY, inspect.Parameter.POSITIONAL_OR_KEYWORD)
         if param.default is inspect.Parameter.empty
     ]
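The dropped `VAR_POSITIONAL` entry matters because a bare `*args` parameter never has a default, so it was being counted as "required" and could keep a decorated function stuck in its deferred form. A simplified sketch of the decorator's idea (only the `required_args` computation appears in this hunk; the dispatch wrapper below is an assumption for illustration):

    import functools
    import inspect

    def partial_application(func):
        sig = inspect.signature(func)
        required_args = [
            param.name for param in sig.parameters.values()
            if param.kind in (inspect.Parameter.POSITIONAL_ONLY, inspect.Parameter.POSITIONAL_OR_KEYWORD)
            if param.default is inspect.Parameter.empty
        ]

        @functools.wraps(func)
        def wrapped(*args, **kwargs):
            # Defer the call while any required positional is still missing
            if set(required_args[len(args):]).difference(kwargs):
                return functools.partial(func, *args, **kwargs)
            return func(*args, **kwargs)

        return wrapped

    @partial_application
    def scale(value, factor=2):
        return value * factor

    times_ten = scale(factor=10)  # 'value' still missing -> returns a partial
    print(times_ten(3))           # 30
    print(scale(3))               # 6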
@@ -874,7 +870,7 @@ def __init__(self, args, *remaining, env=None, text=False, shell=False, **kwargs):
         kwargs.setdefault('encoding', 'utf-8')
         kwargs.setdefault('errors', 'replace')

-        if shell and compat_os_name == 'nt' and kwargs.get('executable') is None:
+        if shell and os.name == 'nt' and kwargs.get('executable') is None:
             if not isinstance(args, str):
                 args = shell_quote(args, shell=True)
             shell = False
@@ -1457,7 +1453,7 @@ def system_identifier():
 @functools.cache
 def get_windows_version():
     """ Get Windows version. returns () if it's not running on Windows """
-    if compat_os_name == 'nt':
+    if os.name == 'nt':
         return version_tuple(platform.win32_ver()[1])
     else:
         return ()
@@ -1470,7 +1466,7 @@ def write_string(s, out=None, encoding=None):
     if not out:
         return

-    if compat_os_name == 'nt' and supports_terminal_sequences(out):
+    if os.name == 'nt' and supports_terminal_sequences(out):
         s = re.sub(r'([\r\n]+)', r' \1', s)

     enc, buffer = None, out
@@ -1503,21 +1499,6 @@ def deprecation_warning(msg, *, printer=None, stacklevel=0, **kwargs):
 deprecation_warning._cache = set()


-def bytes_to_intlist(bs):
-    if not bs:
-        return []
-    if isinstance(bs[0], int):  # Python 3
-        return list(bs)
-    else:
-        return [ord(c) for c in bs]
-
-
-def intlist_to_bytes(xs):
-    if not xs:
-        return b''
-    return struct.pack('%dB' % len(xs), *xs)
-
-
 class LockingUnsupportedError(OSError):
     msg = 'File locking is not supported'
@@ -1701,7 +1682,7 @@ def get_filesystem_encoding():
 def shell_quote(args, *, shell=False):
     args = list(variadic(args))

-    if compat_os_name != 'nt':
+    if os.name != 'nt':
         return shlex.join(args)

     trans = _CMD_QUOTE_TRANS if shell else _WINDOWS_QUOTE_TRANS
@@ -4516,7 +4497,7 @@ def urshift(val, n):
 def write_xattr(path, key, value):
     # Windows: Write xattrs to NTFS Alternate Data Streams:
     # http://en.wikipedia.org/wiki/NTFS#Alternate_data_streams_.28ADS.29
-    if compat_os_name == 'nt':
+    if os.name == 'nt':
         assert ':' not in key
         assert os.path.exists(path)
@@ -4778,12 +4759,12 @@ def jwt_decode_hs256(jwt):
     return json.loads(base64.urlsafe_b64decode(f'{payload_b64}==='))


-WINDOWS_VT_MODE = False if compat_os_name == 'nt' else None
+WINDOWS_VT_MODE = False if os.name == 'nt' else None


 @functools.cache
 def supports_terminal_sequences(stream):
-    if compat_os_name == 'nt':
+    if os.name == 'nt':
         if not WINDOWS_VT_MODE:
             return False
     elif not os.getenv('TERM'):
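Unrelated to the rename, but visible in the context line above: the `f'{payload_b64}==='` idiom works because JWT segments are unpadded base64, and CPython's default (non-strict) decoder tolerates surplus `=` characters, so unconditionally appending three covers every possible remainder. A standalone check with a made-up payload:

    import base64
    import json

    payload_b64 = 'eyJzdWIiOiAiMTIzNCJ9'  # hypothetical JWT payload segment, unpadded
    print(json.loads(base64.urlsafe_b64decode(payload_b64 + '===')))  # {'sub': '1234'}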
@@ -4837,7 +4818,6 @@ def number_of_digits(number):
     return len('%d' % number)


-@partial_application
 def join_nonempty(*values, delim='-', from_dict=None):
     if from_dict is not None:
         values = (traversal.traverse_obj(from_dict, variadic(v)) for v in values)
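With the decorator gone, a call like `join_nonempty(delim=' ')` now executes immediately (returning `''`) rather than potentially deferring. For reference, the function's observable joining behavior, reproduced here as a simplified stand-in without `from_dict` support:

    def join_nonempty(*values, delim='-'):
        # falsy parts (None, '', 0) are filtered out before joining
        return delim.join(map(str, filter(None, values)))

    print(join_nonempty('mp4', None, '', 1080))  # 'mp4-1080'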
@@ -4878,7 +4858,7 @@ def parse_http_range(range):

 def read_stdin(what):
     if what:
-        eof = 'Ctrl+Z' if compat_os_name == 'nt' else 'Ctrl+D'
+        eof = 'Ctrl+Z' if os.name == 'nt' else 'Ctrl+D'
         write_string(f'Reading {what} from STDIN - EOF ({eof}) to end:\n')
     return sys.stdin
@@ -332,14 +332,14 @@ class _RequiredError(ExtractorError):


 @typing.overload
-def subs_list_to_dict(*, ext: str | None = None) -> collections.abc.Callable[[list[dict]], dict[str, list[dict]]]: ...
+def subs_list_to_dict(*, lang: str | None = 'und', ext: str | None = None) -> collections.abc.Callable[[list[dict]], dict[str, list[dict]]]: ...


 @typing.overload
-def subs_list_to_dict(subs: list[dict] | None, /, *, ext: str | None = None) -> dict[str, list[dict]]: ...
+def subs_list_to_dict(subs: list[dict] | None, /, *, lang: str | None = 'und', ext: str | None = None) -> dict[str, list[dict]]: ...


-def subs_list_to_dict(subs: list[dict] | None = None, /, *, ext=None):
+def subs_list_to_dict(subs: list[dict] | None = None, /, *, lang='und', ext=None):
     """
     Convert subtitles from a traversal into a subtitle dict.
     The path should have an `all` immediately before this function.
@@ -352,7 +352,7 @@ def subs_list_to_dict(subs: list[dict] | None = None, /, *, ext=None):
     `quality`    The sort order for each subtitle
     """
     if subs is None:
-        return functools.partial(subs_list_to_dict, ext=ext)
+        return functools.partial(subs_list_to_dict, lang=lang, ext=ext)

     result = collections.defaultdict(list)
@@ -360,10 +360,16 @@ def subs_list_to_dict(subs: list[dict] | None = None, /, *, ext=None):
         if not url_or_none(sub.get('url')) and not sub.get('data'):
             continue
         sub_id = sub.pop('id', None)
-        if sub_id is None:
-            continue
-        if ext is not None and not sub.get('ext'):
-            sub['ext'] = ext
+        if not isinstance(sub_id, str):
+            if not lang:
+                continue
+            sub_id = lang
+        sub_ext = sub.get('ext')
+        if not isinstance(sub_ext, str):
+            if not ext:
+                sub.pop('ext', None)
+            else:
+                sub['ext'] = ext
         result[sub_id].append(sub)
     result = dict(result)
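The net effect of the new defaults, sketched with a hypothetical payload: entries lacking a string `id` are now grouped under `lang` ('und', i.e. ISO 639 "undetermined", unless overridden) instead of being skipped, and a non-string `ext` is replaced by the fallback or removed rather than kept as-is:

    subs = [
        {'url': 'https://example.com/a.vtt', 'id': 'en'},
        {'url': 'https://example.com/b.vtt'},  # no id: now keyed as 'und' instead of dropped
    ]
    # subs_list_to_dict(subs) yields roughly:
    # {'en': [{'url': 'https://example.com/a.vtt'}], 'und': [{'url': 'https://example.com/b.vtt'}]}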
@@ -452,9 +458,9 @@ def trim(s):
     return trim


-def unpack(func):
+def unpack(func, **kwargs):
     @functools.wraps(func)
-    def inner(items, **kwargs):
+    def inner(items):
         return func(*items, **kwargs)

     return inner
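The signature swap moves keyword binding from call time to construction time, which lets `unpack` be primed once and then used where only a single iterable is passed through. A self-contained sketch using only the stdlib:

    import functools

    def unpack(func, **kwargs):
        @functools.wraps(func)
        def inner(items):
            return func(*items, **kwargs)
        return inner

    dash_print = unpack(print, sep='-')  # kwargs are bound here...
    dash_print(['a', 'b', 'c'])          # ...so the call site passes one iterable: prints a-b-c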
@@ -1,8 +1,8 @@
 # Autogenerated by devscripts/update-version.py

-__version__ = '2024.11.04'
+__version__ = '2024.11.18'

-RELEASE_GIT_HEAD = '197d0b03b6a3c8fe4fa5ace630eeffec629bf72c'
+RELEASE_GIT_HEAD = '7ea2787920cccc6b8ea30791993d114fbd564434'

 VARIANT = None

@@ -12,4 +12,4 @@

 ORIGIN = 'yt-dlp/yt-dlp'

-_pkg_version = '2024.11.04'
+_pkg_version = '2024.11.18'