Mirror of https://github.com/yt-dlp/yt-dlp.git
Synced 2024-11-26 21:47:18 +01:00

Compare commits: 1ffc6f95bd ... b201db59d3

21 commits:
b201db59d3
f919729538
7ea2787920
f7257588bd
da252d9d32
a91d9e1084
c34166d7c8
b35550248a
7a9dd3d35f
8d87bb4d91
65f91148fc
6169b3eca8
29278a3323
7a67a2028f
dbf350c122
8451074b50
176a156c65
e092ba9922
5e3894df3f
af03fa4542
da0d84258b
CONTRIBUTORS (+12)

@@ -695,3 +695,15 @@ KBelmin
kesor
MellowKyler
Wesley107772
a13ssandr0
ChocoLZS
doe1080
hugovdev
jshumphrey
julionc
manavchaudhary1
powergold1
Sakura286
SamDecrock
stratus-ss
subrat-lima
Changelog.md (+58)

@@ -4,6 +4,64 @@ # Changelog
# To create a release, dispatch the https://github.com/yt-dlp/yt-dlp/actions/workflows/release.yml workflow on master
-->

### 2024.11.18

#### Important changes
- **Login with OAuth is no longer supported for YouTube**
Due to a change made by the site, yt-dlp is no longer able to support OAuth login for YouTube. [Read more](https://github.com/yt-dlp/yt-dlp/issues/11462#issuecomment-2471703090)

#### Core changes
- [Catch broken Cryptodome installations](https://github.com/yt-dlp/yt-dlp/commit/b83ca24eb72e1e558b0185bd73975586c0bc0546) ([#11486](https://github.com/yt-dlp/yt-dlp/issues/11486)) by [seproDev](https://github.com/seproDev)
- **utils**
    - [Fix `join_nonempty`, add `**kwargs` to `unpack`](https://github.com/yt-dlp/yt-dlp/commit/39d79c9b9cf23411d935910685c40aa1a2fdb409) ([#11559](https://github.com/yt-dlp/yt-dlp/issues/11559)) by [Grub4K](https://github.com/Grub4K)
    - `subs_list_to_dict`: [Add `lang` default parameter](https://github.com/yt-dlp/yt-dlp/commit/c014fbcddcb4c8f79d914ac5bb526758b540ea33) ([#11508](https://github.com/yt-dlp/yt-dlp/issues/11508)) by [Grub4K](https://github.com/Grub4K)
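A minimal sketch of the two `utils` changes above, under the post-change signatures; the caption entry and the `'ru'` language default are illustrative, not taken from any extractor:

```python
from yt_dlp.utils import join_nonempty
from yt_dlp.utils.traversal import subs_list_to_dict, traverse_obj

# join_nonempty drops falsy values before joining
assert join_nonempty('1080p', None, '', 'en', delim='-') == '1080p-en'

# subs_list_to_dict groups subtitle entries by language; an entry without an
# 'id' now falls back to the `lang` default instead of being dropped
captions = [{'file': 'https://example.com/sub.vtt', 'langTitle': 'Russian'}]
subtitles = traverse_obj(captions, (..., {
    'url': 'file',
    'name': 'langTitle',
}, all, {subs_list_to_dict(lang='ru')}))
# expected: {'ru': [{'url': 'https://example.com/sub.vtt', 'name': 'Russian'}]}
```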
#### Extractor changes
- [Allow `ext` override for thumbnails](https://github.com/yt-dlp/yt-dlp/commit/eb64ae7d5def6df2aba74fb703e7f168fb299865) ([#11545](https://github.com/yt-dlp/yt-dlp/issues/11545)) by [bashonly](https://github.com/bashonly)
- **adobepass**: [Fix provider requests](https://github.com/yt-dlp/yt-dlp/commit/85fdc66b6e01d19a94b4f39b58e3c0cf23600902) ([#11472](https://github.com/yt-dlp/yt-dlp/issues/11472)) by [bashonly](https://github.com/bashonly)
- **archive.org**: [Fix comments extraction](https://github.com/yt-dlp/yt-dlp/commit/f2a4983df7a64c4e93b56f79dbd16a781bd90206) ([#11527](https://github.com/yt-dlp/yt-dlp/issues/11527)) by [jshumphrey](https://github.com/jshumphrey)
- **bandlab**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/6365e92589e4bc17b8fffb0125a716d144ad2137) ([#11535](https://github.com/yt-dlp/yt-dlp/issues/11535)) by [seproDev](https://github.com/seproDev)
- **chaturbate**
    - [Extract from API and support impersonation](https://github.com/yt-dlp/yt-dlp/commit/720b3dc453c342bc2e8df7dbc0acaab4479de46c) ([#11555](https://github.com/yt-dlp/yt-dlp/issues/11555)) by [powergold1](https://github.com/powergold1) (With fixes in [7cecd29](https://github.com/yt-dlp/yt-dlp/commit/7cecd299e4a5ef1f0f044b2fedc26f17e41f15e3) by [seproDev](https://github.com/seproDev))
    - [Support alternate domains](https://github.com/yt-dlp/yt-dlp/commit/a9f85670d03ab993dc589f21a9ffffcad61392d5) ([#10595](https://github.com/yt-dlp/yt-dlp/issues/10595)) by [manavchaudhary1](https://github.com/manavchaudhary1)
- **cloudflarestream**: [Avoid extraction via videodelivery.net](https://github.com/yt-dlp/yt-dlp/commit/2db8c2e7d57a1784b06057c48e3e91023720d195) ([#11478](https://github.com/yt-dlp/yt-dlp/issues/11478)) by [hugovdev](https://github.com/hugovdev)
- **ctvnews**
    - [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/f351440f1dc5b3dfbfc5737b037a869d946056fe) ([#11534](https://github.com/yt-dlp/yt-dlp/issues/11534)) by [bashonly](https://github.com/bashonly), [jshumphrey](https://github.com/jshumphrey)
    - [Fix playlist ID extraction](https://github.com/yt-dlp/yt-dlp/commit/f9d98509a898737c12977b2e2117277bada2c196) ([#8892](https://github.com/yt-dlp/yt-dlp/issues/8892)) by [qbnu](https://github.com/qbnu)
- **digitalconcerthall**: [Support login with access/refresh tokens](https://github.com/yt-dlp/yt-dlp/commit/f7257588bdff5f0b0452635a66b253a783c97357) ([#11571](https://github.com/yt-dlp/yt-dlp/issues/11571)) by [bashonly](https://github.com/bashonly)
- **facebook**: [Fix formats extraction](https://github.com/yt-dlp/yt-dlp/commit/bacc31b05a04181b63100c481565256b14813a5e) ([#11513](https://github.com/yt-dlp/yt-dlp/issues/11513)) by [bashonly](https://github.com/bashonly)
- **gamedevtv**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8) ([#11368](https://github.com/yt-dlp/yt-dlp/issues/11368)) by [bashonly](https://github.com/bashonly), [stratus-ss](https://github.com/stratus-ss)
- **goplay**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/6b43a8d84b881d769b480ba6e20ec691e9d1b92d) ([#11466](https://github.com/yt-dlp/yt-dlp/issues/11466)) by [bashonly](https://github.com/bashonly), [SamDecrock](https://github.com/SamDecrock)
- **kenh14**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/eb15fd5a32d8b35ef515f7a3d1158c03025648ff) ([#3996](https://github.com/yt-dlp/yt-dlp/issues/3996)) by [krichbanana](https://github.com/krichbanana), [pzhlkj6612](https://github.com/pzhlkj6612)
- **litv**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/e079ffbda66de150c0a9ebef05e89f61bb4d5f76) ([#11071](https://github.com/yt-dlp/yt-dlp/issues/11071)) by [jiru](https://github.com/jiru)
- **mixchmovie**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/0ec9bfed4d4a52bfb4f8733da1acf0aeeae21e6b) ([#10897](https://github.com/yt-dlp/yt-dlp/issues/10897)) by [Sakura286](https://github.com/Sakura286)
- **patreon**: [Fix comments extraction](https://github.com/yt-dlp/yt-dlp/commit/1d253b0a27110d174c40faf8fb1c999d099e0cde) ([#11530](https://github.com/yt-dlp/yt-dlp/issues/11530)) by [bashonly](https://github.com/bashonly), [jshumphrey](https://github.com/jshumphrey)
- **pialive**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/d867f99622ef7fba690b08da56c39d739b822bb7) ([#10811](https://github.com/yt-dlp/yt-dlp/issues/10811)) by [ChocoLZS](https://github.com/ChocoLZS)
- **radioradicale**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/70c55cb08f780eab687e881ef42bb5c6007d290b) ([#5607](https://github.com/yt-dlp/yt-dlp/issues/5607)) by [a13ssandr0](https://github.com/a13ssandr0), [pzhlkj6612](https://github.com/pzhlkj6612)
- **reddit**: [Improve error handling](https://github.com/yt-dlp/yt-dlp/commit/7ea2787920cccc6b8ea30791993d114fbd564434) ([#11573](https://github.com/yt-dlp/yt-dlp/issues/11573)) by [bashonly](https://github.com/bashonly)
- **redgifsuser**: [Fix extraction](https://github.com/yt-dlp/yt-dlp/commit/d215fba7edb69d4fa665f43663756fd260b1489f) ([#11531](https://github.com/yt-dlp/yt-dlp/issues/11531)) by [jshumphrey](https://github.com/jshumphrey)
- **rutube**: [Rework extractors](https://github.com/yt-dlp/yt-dlp/commit/e398217aae19bb25f91797bfbe8a3243698d7f45) ([#11480](https://github.com/yt-dlp/yt-dlp/issues/11480)) by [seproDev](https://github.com/seproDev)
- **sonylivseries**: [Add `sort_order` extractor-arg](https://github.com/yt-dlp/yt-dlp/commit/2009cb27e17014787bf63eaa2ada51293d54f22a) ([#11569](https://github.com/yt-dlp/yt-dlp/issues/11569)) by [bashonly](https://github.com/bashonly)
- **soop**: [Fix thumbnail extraction](https://github.com/yt-dlp/yt-dlp/commit/c699bafc5038b59c9afe8c2e69175fb66424c832) ([#11545](https://github.com/yt-dlp/yt-dlp/issues/11545)) by [bashonly](https://github.com/bashonly)
- **spankbang**: [Support browser impersonation](https://github.com/yt-dlp/yt-dlp/commit/8388ec256f7753b02488788e3cfa771f6e1db247) ([#11542](https://github.com/yt-dlp/yt-dlp/issues/11542)) by [jshumphrey](https://github.com/jshumphrey)
- **spreaker**
    - [Support episode pages and access keys](https://github.com/yt-dlp/yt-dlp/commit/c39016f66df76d14284c705736ca73db8055d8de) ([#11489](https://github.com/yt-dlp/yt-dlp/issues/11489)) by [julionc](https://github.com/julionc)
    - [Support podcast and feed pages](https://github.com/yt-dlp/yt-dlp/commit/c6737310619022248f5d0fd13872073cac168453) ([#10968](https://github.com/yt-dlp/yt-dlp/issues/10968)) by [subrat-lima](https://github.com/subrat-lima)
- **youtube**
    - [Player client maintenance](https://github.com/yt-dlp/yt-dlp/commit/637d62a3a9fc723d68632c1af25c30acdadeeb85) ([#11528](https://github.com/yt-dlp/yt-dlp/issues/11528)) by [bashonly](https://github.com/bashonly), [seproDev](https://github.com/seproDev)
    - [Remove broken OAuth support](https://github.com/yt-dlp/yt-dlp/commit/52c0ffe40ad6e8404d93296f575007b05b04c686) ([#11558](https://github.com/yt-dlp/yt-dlp/issues/11558)) by [bashonly](https://github.com/bashonly)
    - tab: [Fix podcasts tab extraction](https://github.com/yt-dlp/yt-dlp/commit/37cd7660eaff397c551ee18d80507702342b0c2b) ([#11567](https://github.com/yt-dlp/yt-dlp/issues/11567)) by [seproDev](https://github.com/seproDev)

#### Misc. changes
- **build**
    - [Bump PyInstaller version pin to `>=6.11.1`](https://github.com/yt-dlp/yt-dlp/commit/f9c8deb4e5887ff5150e911ac0452e645f988044) ([#11507](https://github.com/yt-dlp/yt-dlp/issues/11507)) by [bashonly](https://github.com/bashonly)
    - [Enable attestations for trusted publishing](https://github.com/yt-dlp/yt-dlp/commit/f13df591d4d7ca8e2f31b35c9c91e69ba9e9b013) ([#11420](https://github.com/yt-dlp/yt-dlp/issues/11420)) by [bashonly](https://github.com/bashonly)
    - [Pin `websockets` version to >=13.0,<14](https://github.com/yt-dlp/yt-dlp/commit/240a7d43c8a67ffb86d44dc276805aa43c358dcc) ([#11488](https://github.com/yt-dlp/yt-dlp/issues/11488)) by [bashonly](https://github.com/bashonly)
- **cleanup**
    - [Deprecate more compat functions](https://github.com/yt-dlp/yt-dlp/commit/f95a92b3d0169a784ee15a138fbe09d82b2754a1) ([#11439](https://github.com/yt-dlp/yt-dlp/issues/11439)) by [seproDev](https://github.com/seproDev)
    - [Remove dead extractors](https://github.com/yt-dlp/yt-dlp/commit/10fc719bc7f1eef469389c5219102266ef411f29) ([#11566](https://github.com/yt-dlp/yt-dlp/issues/11566)) by [doe1080](https://github.com/doe1080)
    - Miscellaneous: [da252d9](https://github.com/yt-dlp/yt-dlp/commit/da252d9d322af3e2178ac5eae324809502a0a862) by [bashonly](https://github.com/bashonly), [Grub4K](https://github.com/Grub4K), [seproDev](https://github.com/seproDev)

### 2024.11.04

#### Important changes
@@ -342,8 +342,9 @@ ## General Options:
                                    extractor plugins; postprocessor plugins can
                                    only be loaded from the default plugin
                                    directories
    --flat-playlist                 Do not extract the videos of a playlist,
                                    only list them
    --flat-playlist                 Do not extract a playlist's URL result
                                    entries; some entry metadata may be missing
                                    and downloading may be bypassed
    --no-flat-playlist              Fully extract the videos of a playlist
                                    (default)
    --live-from-start               Download livestreams from the start.
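For reference, the same flag through the embedding API; a minimal sketch (the playlist URL is a placeholder), since `--flat-playlist` maps to `extract_flat='in_playlist'` as the `options.py` hunk further down shows:

```python
import yt_dlp

# --flat-playlist <=> extract_flat='in_playlist': playlist entries come back
# as lightweight URL results instead of fully extracted videos
with yt_dlp.YoutubeDL({'extract_flat': 'in_playlist'}) as ydl:
    info = ydl.extract_info('https://www.youtube.com/playlist?list=PLxxxxxxxx', download=False)
    for entry in info.get('entries') or []:
        print(entry.get('url'), entry.get('title'))  # some metadata may be missing
```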
@@ -1866,9 +1867,6 @@ #### orfon (orf:on)
#### bilibili
* `prefer_multi_flv`: Prefer extracting flv formats over mp4 for older videos that still provide legacy formats

#### digitalconcerthall
* `prefer_combined_hls`: Prefer extracting combined/pre-merged video and audio HLS formats. This will exclude 4K/HEVC video and lossless/FLAC audio formats, which are only available as split video/audio HLS formats

#### sonylivseries
* `sort_order`: Episode sort order for series extraction - one of `asc` (ascending, oldest first) or `desc` (descending, newest first). Default is `asc`
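A sketch of passing the new `sort_order` argument via the embedding API's `extractor_args` mapping (`{ie_key: {arg: [values]}}`); the show URL is a placeholder:

```python
import yt_dlp

# CLI equivalent: yt-dlp --extractor-args "sonylivseries:sort_order=desc" URL
opts = {'extractor_args': {'sonylivseries': {'sort_order': ['desc']}}}
with yt_dlp.YoutubeDL(opts) as ydl:
    ydl.download(['https://www.sonyliv.com/shows/placeholder'])
```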
@@ -234,5 +234,10 @@
        "when": "57212a5f97ce367590aaa5c3e9a135eead8f81f7",
        "short": "[ie/vimeo] Fix API retries (#11351)",
        "authors": ["bashonly"]
    },
    {
        "action": "add",
        "when": "52c0ffe40ad6e8404d93296f575007b05b04c686",
        "short": "[priority] **Login with OAuth is no longer supported for YouTube**\nDue to a change made by the site, yt-dlp is no longer able to support OAuth login for YouTube. [Read more](https://github.com/yt-dlp/yt-dlp/issues/11462#issuecomment-2471703090)"
    }
]
@@ -129,6 +129,8 @@ # Supported sites
- **Bandcamp:album**
- **Bandcamp:user**
- **Bandcamp:weekly**
- **Bandlab**
- **BandlabPlaylist**
- **BannedVideo**
- **bbc**: [*bbc*](## "netrc machine") BBC
- **bbc.co.uk**: [*bbc*](## "netrc machine") BBC iPlayer

@@ -484,6 +486,7 @@ # Supported sites
- **Gab**
- **GabTV**
- **Gaia**: [*gaia*](## "netrc machine")
- **GameDevTVDashboard**: [*gamedevtv*](## "netrc machine")
- **GameJolt**
- **GameJoltCommunity**
- **GameJoltGame**

@@ -651,6 +654,8 @@ # Supported sites
- **Karaoketv**
- **Katsomo**: (**Currently broken**)
- **KelbyOne**: (**Currently broken**)
- **Kenh14Playlist**
- **Kenh14Video**
- **Ketnet**
- **khanacademy**
- **khanacademy:unit**

@@ -784,10 +789,6 @@ # Supported sites
- **MicrosoftLearnSession**
- **MicrosoftMedius**
- **microsoftstream**: Microsoft Stream
- **mildom**: Record ongoing live by specific user in Mildom
- **mildom:clip**: Clip in Mildom
- **mildom:user:vod**: Download all VODs from specific user in Mildom
- **mildom:vod**: VOD in Mildom
- **minds**
- **minds:channel**
- **minds:group**

@@ -798,6 +799,7 @@ # Supported sites
- **MiTele**: mitele.es
- **mixch**
- **mixch:archive**
- **mixch:movie**
- **mixcloud**
- **mixcloud:playlist**
- **mixcloud:user**

@@ -1060,8 +1062,8 @@ # Supported sites
- **PhilharmonieDeParis**: Philharmonie de Paris
- **phoenix.de**
- **Photobucket**
- **PiaLive**
- **Piapro**: [*piapro*](## "netrc machine")
- **PIAULIZAPortal**: ulizaportal.jp - PIA LIVE STREAM
- **Picarto**
- **PicartoVod**
- **Piksel**

@@ -1088,8 +1090,6 @@ # Supported sites
- **PodbayFMChannel**
- **Podchaser**
- **podomatic**: (**Currently broken**)
- **Pokemon**
- **PokemonWatch**
- **PokerGo**: [*pokergo*](## "netrc machine")
- **PokerGoCollection**: [*pokergo*](## "netrc machine")
- **PolsatGo**

@@ -1160,6 +1160,7 @@ # Supported sites
- **RadioJavan**: (**Currently broken**)
- **radiokapital**
- **radiokapital:show**
- **RadioRadicale**
- **RadioZetPodcast**
- **radlive**
- **radlive:channel**

@@ -1367,9 +1368,7 @@ # Supported sites
- **spotify**: Spotify episodes (**Currently broken**)
- **spotify:show**: Spotify shows (**Currently broken**)
- **Spreaker**
- **SpreakerPage**
- **SpreakerShow**
- **SpreakerShowPage**
- **SpringboardPlatform**
- **Sprout**
- **SproutVideo**

@@ -1570,6 +1569,8 @@ # Supported sites
- **UFCTV**: [*ufctv*](## "netrc machine")
- **ukcolumn**: (**Currently broken**)
- **UKTVPlay**
- **UlizaPlayer**
- **UlizaPortal**: ulizaportal.jp
- **umg:de**: Universal Music Deutschland (**Currently broken**)
- **Unistra**
- **Unity**: (**Currently broken**)

@@ -1587,8 +1588,6 @@ # Supported sites
- **Varzesh3**: (**Currently broken**)
- **Vbox7**
- **Veo**
- **Veoh**
- **veoh:user**
- **Vesti**: Вести.Ru (**Currently broken**)
- **Vevo**
- **VevoPlaylist**
test/test_parsing.py (new file, +359)

@@ -0,0 +1,359 @@
import textwrap
import unittest

from yt_dlp.compat import compat_HTMLParseError
from yt_dlp.parsing import (
    MatchingElementParser,
    HTMLIgnoreRanges,
    HTMLTagParser,
)

extract_attributes = MatchingElementParser.extract_attributes
get_element_by_attribute = MatchingElementParser.get_element_by_attribute
get_element_by_class = MatchingElementParser.get_element_by_class
get_element_html_by_attribute = MatchingElementParser.get_element_html_by_attribute
get_element_html_by_class = MatchingElementParser.get_element_html_by_class
get_element_text_and_html_by_tag = MatchingElementParser.get_element_text_and_html_by_tag
get_elements_by_attribute = MatchingElementParser.get_elements_by_attribute
get_elements_by_class = MatchingElementParser.get_elements_by_class
get_elements_html_by_attribute = MatchingElementParser.get_elements_html_by_attribute
get_elements_html_by_class = MatchingElementParser.get_elements_html_by_class
get_elements_text_and_html_by_attribute = MatchingElementParser.get_elements_text_and_html_by_attribute
get_elements_text_and_html_by_tag = MatchingElementParser.get_elements_text_and_html_by_tag


class TestParsing(unittest.TestCase):
    def test_extract_attributes(self):
        self.assertEqual(extract_attributes('<e x="y">'), {'x': 'y'})
        self.assertEqual(extract_attributes("<e x='y'>"), {'x': 'y'})
        self.assertEqual(extract_attributes('<e x=y>'), {'x': 'y'})
        self.assertEqual(extract_attributes('<e x="a \'b\' c">'), {'x': "a 'b' c"})
        self.assertEqual(extract_attributes('<e x=\'a "b" c\'>'), {'x': 'a "b" c'})
        self.assertEqual(extract_attributes('<e x="&#121;">'), {'x': 'y'})
        self.assertEqual(extract_attributes('<e x="&#x79;">'), {'x': 'y'})
        self.assertEqual(extract_attributes('<e x="&amp;">'), {'x': '&'})  # XML
        self.assertEqual(extract_attributes('<e x="&quot;">'), {'x': '"'})
        self.assertEqual(extract_attributes('<e x="&pound;">'), {'x': '£'})  # HTML 3.2
        self.assertEqual(extract_attributes('<e x="&lambda;">'), {'x': 'λ'})  # HTML 4.0
        self.assertEqual(extract_attributes('<e x="&foo">'), {'x': '&foo'})
        self.assertEqual(extract_attributes('<e x="\'">'), {'x': "'"})
        self.assertEqual(extract_attributes('<e x=\'"\'>'), {'x': '"'})
        self.assertEqual(extract_attributes('<e x >'), {'x': None})
        self.assertEqual(extract_attributes('<e x=y a>'), {'x': 'y', 'a': None})
        self.assertEqual(extract_attributes('<e x= y>'), {'x': 'y'})
        self.assertEqual(extract_attributes('<e x=1 y=2 x=3>'), {'y': '2', 'x': '3'})
        self.assertEqual(extract_attributes('<e \nx=\ny\n>'), {'x': 'y'})
        self.assertEqual(extract_attributes('<e \nx=\n"y"\n>'), {'x': 'y'})
        self.assertEqual(extract_attributes("<e \nx=\n'y'\n>"), {'x': 'y'})
        self.assertEqual(extract_attributes('<e \nx="\ny\n">'), {'x': '\ny\n'})
        self.assertEqual(extract_attributes('<e CAPS=x>'), {'caps': 'x'})  # Names lowercased
        self.assertEqual(extract_attributes('<e x=1 X=2>'), {'x': '2'})
        self.assertEqual(extract_attributes('<e X=1 x=2>'), {'x': '2'})
        self.assertEqual(extract_attributes('<e _:funny-name1=1>'), {'_:funny-name1': '1'})
        self.assertEqual(extract_attributes('<e x="Fáilte 世界 \U0001f600">'), {'x': 'Fáilte 世界 \U0001f600'})
        self.assertEqual(extract_attributes('<e x="décompose&#769;">'), {'x': 'décompose\u0301'})
        # "Narrow" Python builds don't support unicode code points outside BMP.
        try:
            chr(0x10000)
            supports_outside_bmp = True
        except ValueError:
            supports_outside_bmp = False
        if supports_outside_bmp:
            self.assertEqual(extract_attributes('<e x="Smile &#128512;!">'), {'x': 'Smile \U0001f600!'})
        # Malformed HTML should not break attributes extraction on older Python
        self.assertEqual(extract_attributes('<mal"formed/>'), {})

    GET_ELEMENT_BY_CLASS_TEST_STRING = '''
        <span class="foo bar">nice</span>
        <div class="foo bar">also nice</div>
    '''

    def test_get_element_by_class(self):
        html = self.GET_ELEMENT_BY_CLASS_TEST_STRING

        self.assertEqual(get_element_by_class('foo', html), 'nice')
        self.assertEqual(get_element_by_class('no-such-class', html), None)

    def test_get_element_html_by_class(self):
        html = self.GET_ELEMENT_BY_CLASS_TEST_STRING

        self.assertEqual(get_element_html_by_class('foo', html),
                         '<span class="foo bar">nice</span>')
        self.assertEqual(get_element_by_class('no-such-class', html), None)

    GET_ELEMENT_BY_ATTRIBUTE_TEST_STRING = '''
        <div itemprop="author" itemscope>foo</div>
    '''

    def test_get_element_by_attribute(self):
        html = self.GET_ELEMENT_BY_CLASS_TEST_STRING

        self.assertEqual(get_element_by_attribute('class', 'foo bar', html), 'nice')
        self.assertEqual(get_element_by_attribute('class', 'foo', html), None)
        self.assertEqual(get_element_by_attribute('class', 'no-such-foo', html), None)
        self.assertEqual(get_element_by_attribute('class', 'foo bar', html, tag='div'), 'also nice')

        html = self.GET_ELEMENT_BY_ATTRIBUTE_TEST_STRING

        self.assertEqual(get_element_by_attribute('itemprop', 'author', html), 'foo')

    def test_get_element_html_by_attribute(self):
        html = self.GET_ELEMENT_BY_CLASS_TEST_STRING

        self.assertEqual(get_element_html_by_attribute('class', 'foo bar', html),
                         '<span class="foo bar">nice</span>')
        self.assertEqual(get_element_html_by_attribute('class', 'foo', html), None)
        self.assertEqual(get_element_html_by_attribute('class', 'no-such-foo', html), None)

        html = self.GET_ELEMENT_BY_ATTRIBUTE_TEST_STRING

        self.assertEqual(get_element_html_by_attribute('itemprop', 'author', html), html.strip())

    GET_ELEMENTS_BY_CLASS_TEST_STRING = '''
        <span class="foo bar">nice</span>
        <span class="foo bar">also nice</span>
    '''
    GET_ELEMENTS_BY_CLASS_RES = [
        '<span class="foo bar">nice</span>',
        '<span class="foo bar">also nice</span>'
    ]

    def test_get_elements_by_class(self):
        html = self.GET_ELEMENTS_BY_CLASS_TEST_STRING

        self.assertEqual(get_elements_by_class('foo', html), ['nice', 'also nice'])
        self.assertEqual(get_elements_by_class('no-such-class', html), [])

    def test_get_elements_html_by_class(self):
        html = self.GET_ELEMENTS_BY_CLASS_TEST_STRING

        self.assertEqual(get_elements_html_by_class('foo', html), self.GET_ELEMENTS_BY_CLASS_RES)
        self.assertEqual(get_elements_html_by_class('no-such-class', html), [])

    def test_get_elements_by_attribute(self):
        html = self.GET_ELEMENTS_BY_CLASS_TEST_STRING

        self.assertEqual(get_elements_by_attribute('class', 'foo bar', html), ['nice', 'also nice'])
        self.assertEqual(get_elements_by_attribute('class', 'foo', html), [])
        self.assertEqual(get_elements_by_attribute('class', 'no-such-foo', html), [])

    def test_get_elements_html_by_attribute(self):
        html = self.GET_ELEMENTS_BY_CLASS_TEST_STRING

        self.assertEqual(get_elements_html_by_attribute('class', 'foo bar', html),
                         self.GET_ELEMENTS_BY_CLASS_RES)
        self.assertEqual(get_elements_html_by_attribute('class', 'foo', html), [])
        self.assertEqual(get_elements_html_by_attribute('class', 'no-such-foo', html), [])

    def test_get_elements_text_and_html_by_attribute(self):
        html = self.GET_ELEMENTS_BY_CLASS_TEST_STRING

        self.assertEqual(
            get_elements_text_and_html_by_attribute('class', 'foo bar', html),
            list(zip(['nice', 'also nice'], self.GET_ELEMENTS_BY_CLASS_RES)))
        self.assertEqual(get_elements_text_and_html_by_attribute('class', 'foo', html), [])
        self.assertEqual(get_elements_text_and_html_by_attribute('class', 'no-such-foo', html), [])

        self.assertEqual(get_elements_text_and_html_by_attribute(
            'class', 'foo', '<a class="foo">nice</a><span class="foo">not nice</span>', tag='a'),
            [('nice', '<a class="foo">nice</a>')])

    def test_get_element_text_and_html_by_tag(self):
        get_element_by_tag_test_string = '''
        random text lorem ipsum</p>
        <div>
            this should be returned
            <span>this should also be returned</span>
            <div>
                this should also be returned
            </div>
            closing tag above should not trick, so this should also be returned
        </div>
        but this text should not be returned
        '''
        html = textwrap.indent(textwrap.dedent(get_element_by_tag_test_string), ' ' * 4)
        get_element_by_tag_res_outerdiv_html = html.strip()[32:276]
        get_element_by_tag_res_outerdiv_text = get_element_by_tag_res_outerdiv_html[5:-6]
        get_element_by_tag_res_innerspan_html = html.strip()[78:119]
        get_element_by_tag_res_innerspan_text = get_element_by_tag_res_innerspan_html[6:-7]

        self.assertEqual(
            get_element_text_and_html_by_tag('div', html),
            (get_element_by_tag_res_outerdiv_text, get_element_by_tag_res_outerdiv_html))
        self.assertEqual(
            get_element_text_and_html_by_tag('span', html),
            (get_element_by_tag_res_innerspan_text, get_element_by_tag_res_innerspan_html))
        self.assertIsNone(get_element_text_and_html_by_tag('article', html))

    def test_get_elements_text_and_html_by_tag(self):
        class StrictParser(MatchingElementParser):
            STRICT = True

        test_string = '''
            <img src="a.png">
            <img src="b.png" />
            <span>ignore</span>
        '''
        items = get_elements_text_and_html_by_tag('img', test_string)
        self.assertEqual(items, [('', '<img src="a.png">'), ('', '<img src="b.png" />')])

        self.assertEqual(
            StrictParser.get_element_text_and_html_by_tag('use', '<use><img></use>'),
            ('<img>', '<use><img></use>'))

    def test_get_element_text_and_html_by_tag_malformed(self):
        inner_text = 'inner text'
        malnested_elements = f'<malnested_a><malnested_b>{inner_text}</malnested_a></malnested_b>'
        commented_html = '<!--<div>inner comment</div>-->'
        outerdiv_html = f'<div>{malnested_elements}</div>'
        html = f'{commented_html}{outerdiv_html}'

        self.assertEqual(
            get_element_text_and_html_by_tag('div', html), (malnested_elements, outerdiv_html))
        self.assertEqual(
            get_element_text_and_html_by_tag('malnested_a', html),
            (f'<malnested_b>{inner_text}',
             f'<malnested_a><malnested_b>{inner_text}</malnested_a>'))
        self.assertEqual(
            get_element_text_and_html_by_tag('malnested_b', html),
            (f'{inner_text}</malnested_a>',
             f'<malnested_b>{inner_text}</malnested_a></malnested_b>'))
        self.assertEqual(
            get_element_text_and_html_by_tag('orphan', f'<orphan>{html}'), ('', '<orphan>'))
        self.assertIsNone(get_element_text_and_html_by_tag('orphan', f'{html}</orphan>'))

        # ignore case on tags
        ci_html = f'<SpAn>{html}</sPaN>'
        self.assertEqual(get_element_text_and_html_by_tag('span', ci_html), (html, ci_html))

    def test_strict_html_parsing(self):
        class StrictTagParser(HTMLTagParser):
            STRICT = True

        parser = StrictTagParser()
        with self.assertRaisesRegex(compat_HTMLParseError, "stray closing tag 'p'"):
            parser.taglist('</p>', reset=True)
        with self.assertRaisesRegex(compat_HTMLParseError, "unclosed tag 'p', 'div'"):
            parser.taglist('<div><p>', reset=True)
        with self.assertRaisesRegex(compat_HTMLParseError, "malnested closing tag 'div', expected after '</p>'"):
            parser.taglist('<div><p></div></p>', reset=True)
        with self.assertRaisesRegex(compat_HTMLParseError, "malnested closing tag 'div', expected after '</p>'"):
            parser.taglist('<div><p>/p></div>', reset=True)
        with self.assertRaisesRegex(compat_HTMLParseError, "malformed closing tag 'p<<'"):
            parser.taglist('<div><p></p<< </div>', reset=True)
        with self.assertRaisesRegex(compat_HTMLParseError, "stray closing tag 'img'"):
            parser.taglist('<img>must be empty</img>', reset=True)

    def test_relaxed_html_parsing(self):
        Tag = HTMLTagParser.Tag
        parser = HTMLTagParser()

        self.assertEqual(parser.taglist('</p>', reset=True), [])

        tags = parser.taglist('<div><p>', reset=True)
        self.assertEqual(tags, [Tag('div'), Tag('p')])
        self.assertEqual(tags[0].text_and_html(), ('', '<div>'))
        self.assertEqual(tags[1].text_and_html(), ('', '<p>'))

        tags = parser.taglist('<div><p></div></p>', reset=True)
        self.assertEqual(tags, [Tag('div'), Tag('p')])
        self.assertEqual(tags[0].text_and_html(), ('<p>', '<div><p></div>'))
        self.assertEqual(tags[1].text_and_html(), ('</div>', '<p></div></p>'))

        tags = parser.taglist('<div><p>/p></div>', reset=True)
        self.assertEqual(tags, [Tag('div'), Tag('p')])
        self.assertEqual(tags[0].text_and_html(), ('<p>/p>', '<div><p>/p></div>'))
        self.assertEqual(tags[1].text_and_html(), ('', '<p>'))

        tags = parser.taglist('<div><p>paragraph</p<ignored></div>', reset=True)
        self.assertEqual(tags, [Tag('div'), Tag('p')])
        self.assertEqual(tags[0].text_and_html(),
                         ('<p>paragraph</p<ignored>', '<div><p>paragraph</p<ignored></div>'))
        self.assertEqual(tags[1].text_and_html(), ('paragraph', '<p>paragraph</p<ignored>'))

        tags = parser.taglist('<img width="300px">must be empty</img>', reset=True)
        self.assertEqual(tags, [Tag('img')])
        self.assertEqual(tags[0].text_and_html(), ('', '<img width="300px">'))

    def test_compliant_html_parsing(self):
        # certain elements don't need to be closed (see HTMLTagParser.VOID_TAGS)
        Tag = HTMLTagParser.Tag
        html = '''
            no error without closing tag: <img>
            self closing is ok: <img />
        '''
        parser = HTMLTagParser()
        tags = parser.taglist(html, reset=True)
        self.assertEqual(tags, [Tag('img'), Tag('img')])

        # don't get fooled by '>' in attributes
        html = '''<img greater_a='1>0' greater_b="1>0">'''
        tags = parser.taglist(html, reset=True)
        self.assertEqual(tags[0].text_and_html(), ('', html))

    def test_tag_return_order(self):
        Tag = HTMLTagParser.Tag
        html = '''
            <t0>
                <t1>
                    <t2>
                        <t3 /> <t4 />
                    </t2>
                </t1>
                <t5>
                    <t6 />
                </t5>
            </t0>
            <t7>
                <t8 />
            </t7>
        '''
        parser = HTMLTagParser()
        tags = parser.taglist(html, reset=True)
        self.assertEqual(
            str(tags), str([Tag('t0'), Tag('t1'), Tag('t2'), Tag('t3'), Tag('t4'),
                            Tag('t5'), Tag('t6'), Tag('t7'), Tag('t8')]))

        tags = parser.taglist(html, reset=True, depth_first=True)
        self.assertEqual(
            str(tags), str([Tag('t3'), Tag('t4'), Tag('t2'), Tag('t1'), Tag('t6'),
                            Tag('t5'), Tag('t0'), Tag('t8'), Tag('t7')]))

        # return tags in nested order
        tags = parser.taglist(html, reset=True, depth_first=None)
        self.assertEqual(
            str(tags), str([
                [Tag('t0'),
                 [Tag('t1'),
                  [Tag('t2'), [Tag('t3')], [Tag('t4')]]],
                 [Tag('t5'), [Tag('t6')]]],
                [Tag('t7'), [Tag('t8')]]]))

    def test_html_ignored_ranges(self):
        def mark_comments(_string, char='^', nochar='-'):
            cmts = HTMLIgnoreRanges(_string)
            return "".join(char if _idx in cmts else nochar for _idx in range(len(_string)))

        html_string = '''
            no comments in this line
            ---------------------------------------------------------------------
            <!-- whole line represents a comment -->
            ----^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
            before <!-- comment --> after
            -----------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^-----------
            this is a leftover comment --> <!-- a new comment without closing
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
            here is <!-- a comment --> and <!-- another comment --!> end
            ----------------^^^^^^^^^^^----------------^^^^^^^^^^^^^^^^^---------
            <script> ignore here </script> <SCRIPT> and here </SCRIPT>
            --------^^^^^^^^^^^^^-----------------------------^^^^^^^^^^---------
        '''

        lines = textwrap.dedent(html_string).strip().splitlines()
        for line, marker in zip(lines[0::2], lines[1::2]):
            self.assertEqual((line, mark_comments(line)), (line, marker))

        # yet we must be able to match script elements
        test_string = '''<script type="text/javascript">var foo = 'bar';</script>'''
        items = get_element_text_and_html_by_tag('script', test_string)
        self.assertEqual(items, ("var foo = 'bar';", test_string))
@@ -1,4 +1,3 @@
from .common import InfoExtractor
from ..utils import (
    ExtractorError,
@@ -3767,7 +3767,7 @@ def _merge_subtitles(cls, *dicts, target=None):
        """ Merge subtitle dictionaries, language by language. """
        if target is None:
            target = {}
        for d in dicts:
        for d in filter(None, dicts):
            for lang, subs in d.items():
                target[lang] = cls._merge_subtitle_items(target.get(lang, []), subs)
        return target
@@ -176,7 +176,7 @@ def _real_extract(self, url):
            self._ninecninemedia_url_result(clip_id) for clip_id in
            traverse_obj(webpage, (
                {find_element(tag='jasper-player-container', html=True)},
                {extract_attributes}, 'axis-ids', {json.loads}, ..., 'axisId'))
                {extract_attributes}, 'axis-ids', {json.loads}, ..., 'axisId', {str}))
        ]

        return self.playlist_result(entries, page_id)
@@ -1,7 +1,10 @@
import time

from .common import InfoExtractor
from ..networking.exceptions import HTTPError
from ..utils import (
    ExtractorError,
    jwt_decode_hs256,
    parse_codecs,
    try_get,
    url_or_none,

@@ -13,9 +16,6 @@
class DigitalConcertHallIE(InfoExtractor):
    IE_DESC = 'DigitalConcertHall extractor'
    _VALID_URL = r'https?://(?:www\.)?digitalconcerthall\.com/(?P<language>[a-z]+)/(?P<type>film|concert|work)/(?P<id>[0-9]+)-?(?P<part>[0-9]+)?'
    _OAUTH_URL = 'https://api.digitalconcerthall.com/v2/oauth2/token'
    _USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15'
    _ACCESS_TOKEN = None
    _NETRC_MACHINE = 'digitalconcerthall'
    _TESTS = [{
        'note': 'Playlist with only one video',

@@ -69,59 +69,157 @@ class DigitalConcertHallIE(InfoExtractor):
        'params': {'skip_download': 'm3u8'},
        'playlist_count': 1,
    }]
    _LOGIN_HINT = ('Use --username token --password ACCESS_TOKEN where ACCESS_TOKEN '
                   'is the "access_token_production" from your browser local storage')
    _REFRESH_HINT = 'or else use a "refresh_token" with --username refresh --password REFRESH_TOKEN'
    _OAUTH_URL = 'https://api.digitalconcerthall.com/v2/oauth2/token'
    _CLIENT_ID = 'dch.webapp'
    _CLIENT_SECRET = '2ySLN+2Fwb'
    _USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15'
    _OAUTH_HEADERS = {
        'Accept': 'application/json',
        'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
        'Origin': 'https://www.digitalconcerthall.com',
        'Referer': 'https://www.digitalconcerthall.com/',
        'User-Agent': _USER_AGENT,
    }
    _access_token = None
    _access_token_expiry = 0
    _refresh_token = None

    def _perform_login(self, username, password):
        login_token = self._download_json(
            self._OAUTH_URL,
            None, 'Obtaining token', errnote='Unable to obtain token', data=urlencode_postdata({
    @property
    def _access_token_is_expired(self):
        return self._access_token_expiry - 30 <= int(time.time())

    def _set_access_token(self, value):
        self._access_token = value
        self._access_token_expiry = traverse_obj(value, ({jwt_decode_hs256}, 'exp', {int})) or 0

    def _cache_tokens(self, /):
        self.cache.store(self._NETRC_MACHINE, 'tokens', {
            'access_token': self._access_token,
            'refresh_token': self._refresh_token,
        })

    def _fetch_new_tokens(self, invalidate=False):
        if invalidate:
            self.report_warning('Access token has been invalidated')
            self._set_access_token(None)

        if not self._access_token_is_expired:
            return

        if not self._refresh_token:
            self._set_access_token(None)
            self._cache_tokens()
            raise ExtractorError(
                'Access token has expired or been invalidated. '
                'Get a new "access_token_production" value from your browser '
                f'and try again, {self._REFRESH_HINT}', expected=True)

        # If we only have a refresh token, we need a temporary "initial token" for the refresh flow
        bearer_token = self._access_token or self._download_json(
            self._OAUTH_URL, None, 'Obtaining initial token', 'Unable to obtain initial token',
            data=urlencode_postdata({
                'affiliate': 'none',
                'grant_type': 'device',
                'device_vendor': 'unknown',
                # device_model 'Safari' gets split streams of 4K/HEVC video and lossless/FLAC audio
                'device_model': 'unknown' if self._configuration_arg('prefer_combined_hls') else 'Safari',
                'app_id': 'dch.webapp',
                # device_model 'Safari' gets split streams of 4K/HEVC video and lossless/FLAC audio,
                # but this is no longer effective since actual login is not possible anymore
                'device_model': 'unknown',
                'app_id': self._CLIENT_ID,
                'app_distributor': 'berlinphil',
                'app_version': '1.84.0',
                'client_secret': '2ySLN+2Fwb',
            }), headers={
                'Accept': 'application/json',
                'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
                'User-Agent': self._USER_AGENT,
            })['access_token']
                'app_version': '1.95.0',
                'client_secret': self._CLIENT_SECRET,
            }), headers=self._OAUTH_HEADERS)['access_token']

        try:
            login_response = self._download_json(
                self._OAUTH_URL,
                None, note='Logging in', errnote='Unable to login', data=urlencode_postdata({
                    'grant_type': 'password',
                    'username': username,
                    'password': password,
            response = self._download_json(
                self._OAUTH_URL, None, 'Refreshing token', 'Unable to refresh token',
                data=urlencode_postdata({
                    'grant_type': 'refresh_token',
                    'refresh_token': self._refresh_token,
                    'client_id': self._CLIENT_ID,
                    'client_secret': self._CLIENT_SECRET,
                }), headers={
                    'Accept': 'application/json',
                    'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
                    'Referer': 'https://www.digitalconcerthall.com',
                    'Authorization': f'Bearer {login_token}',
                    'User-Agent': self._USER_AGENT,
                    **self._OAUTH_HEADERS,
                    'Authorization': f'Bearer {bearer_token}',
                })
        except ExtractorError as error:
            if isinstance(error.cause, HTTPError) and error.cause.status == 401:
                raise ExtractorError('Invalid username or password', expected=True)
        except ExtractorError as e:
            if isinstance(e.cause, HTTPError) and e.cause.status == 401:
                self._set_access_token(None)
                self._refresh_token = None
                self._cache_tokens()
                raise ExtractorError('Your tokens have been invalidated', expected=True)
            raise
        self._ACCESS_TOKEN = login_response['access_token']

        self._set_access_token(response['access_token'])
        if refresh_token := traverse_obj(response, ('refresh_token', {str})):
            self.write_debug('New refresh token granted')
            self._refresh_token = refresh_token
        self._cache_tokens()

    def _perform_login(self, username, password):
        self.report_login()

        if username == 'refresh':
            self._refresh_token = password
            self._fetch_new_tokens()

        if username == 'token':
            if not traverse_obj(password, {jwt_decode_hs256}):
                raise ExtractorError(
                    f'The access token passed to yt-dlp is not valid. {self._LOGIN_HINT}', expected=True)
            self._set_access_token(password)
            self._cache_tokens()

        if username in ('refresh', 'token'):
            if self.get_param('cachedir') is not False:
                token_type = 'access' if username == 'token' else 'refresh'
                self.to_screen(f'Your {token_type} token has been cached to disk. To use the cached '
                               'token next time, pass --username cache along with any password')
            return

        if username != 'cache':
            raise ExtractorError(
                'Login with username and password is no longer supported '
                f'for this site. {self._LOGIN_HINT}, {self._REFRESH_HINT}', expected=True)

        # Try cached access_token
        cached_tokens = self.cache.load(self._NETRC_MACHINE, 'tokens', default={})
        self._set_access_token(cached_tokens.get('access_token'))
        self._refresh_token = cached_tokens.get('refresh_token')
        if not self._access_token_is_expired:
            return

        # Try cached refresh_token
        self._fetch_new_tokens(invalidate=True)

    def _real_initialize(self):
        if not self._ACCESS_TOKEN:
            self.raise_login_required(method='password')
        if not self._access_token:
            self.raise_login_required(
                'All content on this site is only available for registered users. '
                f'{self._LOGIN_HINT}, {self._REFRESH_HINT}', method=None)

    def _entries(self, items, language, type_, **kwargs):
        for item in items:
            video_id = item['id']
            stream_info = self._download_json(
                self._proto_relative_url(item['_links']['streams']['href']), video_id, headers={
                    'Accept': 'application/json',
                    'Authorization': f'Bearer {self._ACCESS_TOKEN}',
                    'Accept-Language': language,
                    'User-Agent': self._USER_AGENT,
                })

            for should_retry in (True, False):
                self._fetch_new_tokens(invalidate=not should_retry)
                try:
                    stream_info = self._download_json(
                        self._proto_relative_url(item['_links']['streams']['href']), video_id, headers={
                            'Accept': 'application/json',
                            'Authorization': f'Bearer {self._access_token}',
                            'Accept-Language': language,
                            'User-Agent': self._USER_AGENT,
                        })
                    break
                except ExtractorError as error:
                    if should_retry and isinstance(error.cause, HTTPError) and error.cause.status == 401:
                        continue
                    raise

            formats = []
            for m3u8_url in traverse_obj(stream_info, ('channel', ..., 'stream', ..., 'url', {url_or_none})):

@@ -157,7 +255,6 @@ def _real_extract(self, url):
                'Accept': 'application/json',
                'Accept-Language': language,
                'User-Agent': self._USER_AGENT,
                'Authorization': f'Bearer {self._ACCESS_TOKEN}',
            })
        videos = [vid_info] if type_ == 'film' else traverse_obj(vid_info, ('_embedded', ..., ...))
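The refactor above keys token refreshes off the JWT `exp` claim; a minimal standalone sketch of that check (same 30-second margin as `_access_token_is_expired`; `token` is assumed to be a well-formed JWT string):

```python
import time

from yt_dlp.utils import jwt_decode_hs256


def access_token_is_expired(token, margin=30):
    # jwt_decode_hs256 returns the decoded payload without verifying the signature
    expiry = (jwt_decode_hs256(token) or {}).get('exp') or 0
    return expiry - margin <= int(time.time())
```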
@@ -569,7 +569,7 @@ def extract_dash_manifest(vid_data, formats, mpd_url=None):
            if dash_manifest:
                formats.extend(self._parse_mpd_formats(
                    compat_etree_fromstring(urllib.parse.unquote_plus(dash_manifest)),
                    mpd_url=url_or_none(video.get('dash_manifest_url')) or mpd_url))
                    mpd_url=url_or_none(vid_data.get('dash_manifest_url')) or mpd_url))

        def process_formats(info):
            # Downloads with browser's User-Agent are rate limited. Working around
@@ -259,6 +259,8 @@ def _real_extract(self, url):
                f'https://www.reddit.com/{slug}/.json', video_id, expected_status=403)
        except ExtractorError as e:
            if isinstance(e.cause, json.JSONDecodeError):
                if self._get_cookies('https://www.reddit.com/').get('reddit_session'):
                    raise ExtractorError('Your IP address is unable to access the Reddit API', expected=True)
                self.raise_login_required('Account authentication is required')
            raise
@@ -13,7 +13,10 @@
    unified_timestamp,
    url_or_none,
)
from ..utils.traversal import traverse_obj
from ..utils.traversal import (
    subs_list_to_dict,
    traverse_obj,
)


class RutubeBaseIE(InfoExtractor):

@@ -92,11 +95,11 @@ def _extract_formats_and_subtitles(self, options, video_id):
                hls_url, video_id, 'mp4', fatal=False, m3u8_id='hls')
            formats.extend(fmts)
            self._merge_subtitles(subs, target=subtitles)
        for caption in traverse_obj(options, ('captions', lambda _, v: url_or_none(v['file']))):
            subtitles.setdefault(caption.get('code') or 'ru', []).append({
                'url': caption['file'],
                'name': caption.get('langTitle'),
            })
        self._merge_subtitles(traverse_obj(options, ('captions', ..., {
            'id': 'code',
            'url': 'file',
            'name': ('langTitle', {str}),
        }, all, {subs_list_to_dict(lang='ru')})), target=subtitles)
        return formats, subtitles

    def _download_and_extract_formats_and_subtitles(self, video_id, query=None):
@@ -241,7 +241,7 @@ def _extract_info_dict(self, info, full_title=None, secret_token=None, extract_f
            format_urls.add(format_url)
            formats.append({
                'format_id': 'download',
                'ext': urlhandle_detect_ext(urlh) or 'mp3',
                'ext': urlhandle_detect_ext(urlh, default='mp3'),
                'filesize': int_or_none(urlh.headers.get('Content-Length')),
                'url': format_url,
                'quality': 10,
@@ -419,7 +419,9 @@ def _alias_callback(option, opt_str, value, parser, opts, nargs):
    general.add_option(
        '--flat-playlist',
        action='store_const', dest='extract_flat', const='in_playlist', default=False,
        help='Do not extract the videos of a playlist, only list them')
        help=(
            'Do not extract a playlist\'s URL result entries; '
            'some entry metadata may be missing and downloading may be bypassed'))
    general.add_option(
        '--no-flat-playlist',
        action='store_false', dest='extract_flat',
yt_dlp/parsing.py (new file, +348)

@@ -0,0 +1,348 @@
import collections
import contextlib
import itertools
import re
from html.parser import HTMLParser

from .compat import compat_HTMLParseError
from .utils import orderedSet


class HTMLIgnoreRanges:
    """check if an offset is within CDATA content elements (script, style) or XML comments

    note:
      * given offsets must be in increasing order
      * no detection of nested constructs (e.g. comments within script tags)

    usage:
      ranges = HTMLIgnoreRanges(html)
      if offset in ranges:
          ...
    """
    REGEX = re.compile(r'<!--|--!?>|</?\s*(?:script|style)\b[^>]*>', flags=re.IGNORECASE)

    def __init__(self, html):
        self.html = html
        self._last_match = None
        self._final = False

    def __contains__(self, offset):
        assert isinstance(offset, int)

        if not self._final and (self._last_match is None or offset >= self._last_match.end()):
            match = self.REGEX.search(self.html, offset)
            if match:
                self._last_match = match
            else:
                self._final = True

        if self._last_match is None:
            return False
        match_string = self._last_match.group()
        if match_string.startswith('</') or match_string in ('-->', '--!>'):
            return offset < self._last_match.start()
        return offset >= self._last_match.end()


class HTMLTagParser(HTMLParser):
    """HTML parser which returns found elements as instances of 'Tag'
    when STRICT=True can raise compat_HTMLParseError() on malformed HTML elements

    usage:
      parser = HTMLTagParser()
      for tag_obj in parser.taglist(html):
          tag_obj.text_and_html()
    """

    STRICT = False
    ANY_TAG_REGEX = re.compile(r'''<(?:"[^"]*"|'[^']*'|[^"'>])*?>''')
    VOID_TAGS = {
        'area', 'base', 'br', 'col', 'embed', 'hr', 'img', 'input',
        'keygen', 'link', 'meta', 'param', 'source', 'track', 'wbr',
    }

    class Tag:
        __slots__ = 'name', 'string', 'attrs', '_openrange', '_closerange'

        def __init__(self, name, *, string='', attrs=()):
            self.name = name
            self.string = string
            self.attrs = tuple(attrs)
            self._openrange = None
            self._closerange = None

        def __str__(self):
            return self.name

        def __repr__(self):
            return f'{self.__class__.__name__}({str(self)!r})'

        def __eq__(self, other):
            return self.name == other

        def openrange(self, offset, startlen=0):
            if isinstance(offset, slice):
                self._openrange = offset
            else:
                self._openrange = slice(offset, offset + startlen)

        def closerange(self, offset, stoplen=0):
            if isinstance(offset, slice):
                self._closerange = offset
            else:
                self._closerange = slice(offset, offset + stoplen)

        def opentag(self):
            return self.string[self._openrange] if self._openrange else ''

        def html(self):
            if not self._openrange:
                return ''
            if self._closerange:
                return self.string[self._openrange.start:self._closerange.stop]
            return self.string[self._openrange]

        def text(self):
            if self._openrange and self._closerange:
                return self.string[self._openrange.stop:self._closerange.start]
            return ''

        def text_and_html(self):
            return self.text(), self.html()

    class AbortException(Exception):
        pass

    def __init__(self):
        self.tagstack = collections.deque()
        self._nestedtags = [[]]
        super().__init__()
        self._offset = self.offset

    def predicate(self, tag, attrs):
        """ return True for every encountered opening tag that should be processed """
        return True

    def callback(self, tag_obj):
        """ this will be called when the requested tag is closed """

    def reset(self):
        super().reset()
        self.tagstack.clear()

    def taglist(self, data, reset=True, depth_first=False):
        """ parse data and return found tag objects
        @param data: html string
        @param reset: reset state
        @param depth_first: return order: as opened (False), as closed (True), nested (None)
        @return: list of Tag objects
        """
        def flatten(_list, first=True):
            rlist = _list if first or not depth_first else itertools.chain(_list[1:], _list[:1])
            for item in rlist:
                if isinstance(item, list):
                    yield from flatten(item, first=False)
                else:
                    yield item

        if reset:
            self.reset()
        with contextlib.suppress(HTMLTagParser.AbortException):
            self.feed(data)
        if self.STRICT and self.tagstack:
            orphans = ', '.join(map(repr, map(str, orderedSet(self.tagstack, lazy=True))))
            raise compat_HTMLParseError(f'unclosed tag {orphans}')
        taglist = self._nestedtags[0] if depth_first is None else list(flatten(self._nestedtags[0]))
        self._nestedtags = [[]]
        return taglist

    def updatepos(self, i, j):
        offset = self._offset = super().updatepos(i, j)
        return offset

    def handle_starttag(self, tag, attrs):
        try:
            # we use internal variable for performance reasons
            tag_text = getattr(self, '_HTMLParser__starttag_text')
        except AttributeError:
            tag_text = HTMLTagParser.ANY_TAG_REGEX.match(self.rawdata[self._offset:]).group()

        tag_obj = tag
        tag_is_open = not (tag_text.endswith('/>') or tag in self.VOID_TAGS)
        if self.predicate(tag, attrs):
            tag_obj = self.Tag(tag, string=self.rawdata, attrs=attrs)
            tag_obj.openrange(self._offset, len(tag_text))
            nesting = [tag_obj]
            self._nestedtags[-1].append(nesting)
            if tag_is_open:
                self._nestedtags.append(nesting)
            else:
                self.callback(tag_obj)
        if tag_is_open:
            self.tagstack.appendleft(tag_obj)

    handle_startendtag = handle_starttag

    def handle_endtag(self, tag):
        if '<' in tag:
            if self.STRICT:
                raise compat_HTMLParseError(f'malformed closing tag {tag!r}')
            tag = tag[:tag.index('<')]

        try:
            idx = self.tagstack.index(tag)
            if self.STRICT and idx:
                open_tags = ''.join(f'</{tag}>' for tag in itertools.islice(self.tagstack, idx))
                raise compat_HTMLParseError(
                    f'malnested closing tag {tag!r}, expected after {open_tags!r}')
            tag_obj = self.tagstack[idx]
            self.tagstack.remove(tag)
            if isinstance(tag_obj, self.Tag):
                tag_obj.closerange(slice(self._offset, self.rawdata.find('>', self._offset) + 1))
                self._nestedtags.pop()
                self.callback(tag_obj)
        except ValueError as exc:
            if isinstance(exc, compat_HTMLParseError):
                raise
            if self.STRICT:
                raise compat_HTMLParseError(f'stray closing tag {tag!r}') from exc


class MatchingElementParser(HTMLTagParser):
    """ optimized version of HTMLTagParser
    """
    def __init__(self, matchfunc):
        super().__init__()
        self.matchfunc = matchfunc
        self.found_none = True

    def reset(self):
        super().reset()
        self.found_none = True

    def callback(self, tag_obj):
        raise self.AbortException()

    def predicate(self, tag, attrs):
        if self.found_none and self.matchfunc(tag, attrs):
            self.found_none = False
            return True
        return False

    @staticmethod
    def class_value_regex(class_name):
        return rf'[\w\s\-]*(?<![\w\-]){re.escape(class_name)}(?![\w\-])[\w\s\-]*'

    @staticmethod
    def matching_tag_regex(tag, attribute, value_regex, escape=True):
        if isinstance(value_regex, re.Pattern):
            value_regex = value_regex.pattern
        elif escape:
            value_regex = re.escape(value_regex)

        return rf'''(?x)
            <(?i:{tag})
            (?:\s(?:[^>"'\\]|"[^"\\]*"|'[^'\\]*')*)?
            \s{re.escape(attribute)}\s*=\s*(?P<_q>['"])(?-x:{value_regex})(?P=_q)
        '''

    @classmethod
    def iter_tags(cls, regex, html, *, matchfunc):
        ignored = HTMLIgnoreRanges(html)
        parser = cls(matchfunc)
        for match in re.finditer(regex, html):
            if match.start() not in ignored:
                yield from parser.taglist(html[match.start():], reset=True)

    @classmethod
    def tags_by_name(cls, tag, html):
        def matchfunc(tag_str, _attrs):
            return tag_str == tag

        tag_regex = rf'''<\s*(?i:{re.escape(tag)})(?:\s(?:[^>"'\\]|"[^"\\]*"|'[^'\\]*')*)?>'''
        yield from cls.iter_tags(tag_regex, html, matchfunc=matchfunc)

    @classmethod
    def tags_by_attribute(cls, attribute, value, html, *, tag=r'[\w:.-]+', escape_value=True):
        def matchfunc(_tag_str, attrs):
            return any(attr == attribute and re.fullmatch(value, value_str)
                       for attr, value_str in attrs)

        tag_regex = cls.matching_tag_regex(tag, attribute, value, escape_value)
        yield from cls.iter_tags(tag_regex, html, matchfunc=matchfunc)

    @classmethod
    def extract_attributes(cls, html):
        attr_dict = {}

        def matchfunc(_tag, attrs):
            attr_dict.update(attrs)
            raise cls.AbortException()

        with contextlib.suppress(cls.AbortException):
            cls(matchfunc).feed(html)

        return attr_dict

    @classmethod
    def get_elements_text_and_html_by_tag(cls, tag, html):
        return [tag.text_and_html() for tag in cls.tags_by_name(tag, html)]

    @classmethod
    def get_element_text_and_html_by_tag(cls, tag, html):
        tag = next(cls.tags_by_name(tag, html), None)
        return tag and tag.text_and_html()

    @classmethod
    def get_elements_text_and_html_by_attribute(cls, *args, **kwargs):
        return [tag.text_and_html() for tag in cls.tags_by_attribute(*args, **kwargs)]

    @classmethod
    def get_elements_by_attribute(cls, *args, **kwargs):
        return [tag.text() for tag in cls.tags_by_attribute(*args, **kwargs)]

    @classmethod
    def get_elements_html_by_attribute(cls, *args, **kwargs):
        return [tag.html() for tag in cls.tags_by_attribute(*args, **kwargs)]

    @classmethod
    def get_element_by_attribute(cls, *args, **kwargs):
        tag = next(cls.tags_by_attribute(*args, **kwargs), None)
        return tag and tag.text()

    @classmethod
    def get_element_html_by_attribute(cls, *args, **kwargs):
        tag = next(cls.tags_by_attribute(*args, **kwargs), None)
        return tag and tag.html()

    @classmethod
    def get_elements_by_class(cls, class_name, html):
        value = cls.class_value_regex(class_name)
        return [tag.text() for tag
                in cls.tags_by_attribute('class', value, html, escape_value=False)]

    @classmethod
    def get_elements_html_by_class(cls, class_name, html):
        value = cls.class_value_regex(class_name)
        return [tag.html() for tag
                in cls.tags_by_attribute('class', value, html, escape_value=False)]

    @classmethod
    def get_elements_text_and_html_by_class(cls, class_name, html):
        value = cls.class_value_regex(class_name)
        return [tag.text_and_html() for tag
                in cls.tags_by_attribute('class', value, html, escape_value=False)]

    @classmethod
    def get_element_html_by_class(cls, class_name, html):
        value = cls.class_value_regex(class_name)
        tag = next(cls.tags_by_attribute('class', value, html, escape_value=False), None)
        return tag and tag.html()

    @classmethod
    def get_element_by_class(cls, class_name, html):
        value = cls.class_value_regex(class_name)
        tag = next(cls.tags_by_attribute('class', value, html, escape_value=False), None)
        return tag and tag.text()
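A short usage sketch of the new module's classmethod helpers, mirroring the tests above (the HTML snippet is illustrative):

```python
from yt_dlp.parsing import MatchingElementParser

html = '<div class="foo bar">nice</div> <span class="foo">also nice</span>'

# class matching is word-bounded, so 'foo' also matches class="foo bar"
print(MatchingElementParser.get_element_by_class('foo', html))
# -> 'nice'
print(MatchingElementParser.get_elements_html_by_class('foo', html))
# -> ['<div class="foo bar">nice</div>', '<span class="foo">also nice</span>']
print(MatchingElementParser.get_element_text_and_html_by_tag('span', html))
# -> ('also nice', '<span class="foo">also nice</span>')
```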
@@ -408,17 +408,13 @@ def close(self):
        pass

    def handle_starttag(self, tag, _):
        self.tagstack.append(tag)
        self.tagstack.appendleft(tag)

    def handle_endtag(self, tag):
        if not self.tagstack:
            raise compat_HTMLParseError('no tags in the stack')
        while self.tagstack:
            inner_tag = self.tagstack.pop()
            if inner_tag == tag:
                break
        else:
            raise compat_HTMLParseError(f'matching opening tag for closing {tag} tag not found')
        with contextlib.suppress(ValueError):
            self.tagstack.remove(tag)
        if not self.tagstack:
            raise self.HTMLBreakOnClosingTagException

@@ -452,6 +448,8 @@ def find_or_raise(haystack, needle, exc):
        next_closing_tag_end = next_closing_tag_start + len(closing_tag)
        try:
            parser.feed(html[offset:offset + next_closing_tag_end])
            if tag not in parser.tagstack:
                raise HTMLBreakOnClosingTagParser.HTMLBreakOnClosingTagException()
            offset += next_closing_tag_end
        except HTMLBreakOnClosingTagParser.HTMLBreakOnClosingTagException:
            return html[content_start:offset + next_closing_tag_start], \
@@ -1,8 +1,8 @@
# Autogenerated by devscripts/update-version.py

__version__ = '2024.11.04'
__version__ = '2024.11.18'

RELEASE_GIT_HEAD = '197d0b03b6a3c8fe4fa5ace630eeffec629bf72c'
RELEASE_GIT_HEAD = '7ea2787920cccc6b8ea30791993d114fbd564434'

VARIANT = None
@@ -12,4 +12,4 @@
ORIGIN = 'yt-dlp/yt-dlp'

_pkg_version = '2024.11.04'
_pkg_version = '2024.11.18'