Still a little clumsy, but...
#!/bin/bash
# Sanity check: the argument must look like a YouTube watch URL.
arg=$(echo "$1" | sed -e 's/v=.*$/v=/')
if ! [ "$arg" = "http://www.youtube.com/watch?v=" ]; then
    echo "Not a YouTube URL: $1"
    exit 1
fi
# Everything after "v=" is the video ID; fetch the watch page so we can read its title.
target=$(echo "$1" | sed -e 's/^.*v=//')
wget "$1" -O /tmp/fetched_from_youtube.data
# Pull the clip title out of the fetched page and sanitise it into a filename:
# strip everything before the first capital letter, drop any parenthesised tail,
# remove punctuation, trim trailing spaces, and turn runs of spaces into underscores.
title=$(gawk '/<.title>/ { x=0; }
    x > 0 {
        if ($1 == "-" && $2 != "") {
            sub("^[^A-Z]*", "");
            sub("\\(.*\\).*$", "");
            gsub("[[:punct:]]", "");
            sub(" +$", "");
            gsub(" +", "_");
            sub("_\\.", ".");
            print $0;
        }
    }
    // { x=2; }' /tmp/fetched_from_youtube.data)
rm -f /tmp/fetched_from_youtube.data
# Fetch the FLV with youtube-dl, convert it to MPEG, then clean up the FLV.
python youtube-dl.py "$target"
ffmpeg -i "${target}.flv" -ab 56 -ar 22050 -b 500 -s 320x240 "${title}_${RANDOM}.mpg"
rm -f "${target}.flv"
exit 0
...it works for me, and it returns an exit status that the wrapper script can check at will.
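As a rough illustration of that status check, a wrapper might call the script like this (yt_fetch.sh is my placeholder name; the original post never names the file):

# Hypothetical invocation; yt_fetch.sh stands in for the script above.
if ./yt_fetch.sh "http://www.youtube.com/watch?v=SOME_VIDEO_ID"; then
    echo "downloaded and converted OK"
else
    echo "fetch failed; leave it on the pending list"
fi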
The _${RANDOM} suffix is in there to distinguish between clips whose titles happen to be identical after filtering. youtube-dl.py is the script downloaded from the link on this post.
Next, time permitting, I plan to enhance the wrapper script so that it erases each line from the pending-downloads file it is reading the instant the corresponding download completes. Otherwise a long list of YouTube downloads can sit there perennially being re-downloaded, and the same goes for whatever other material needs special download techniques.
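A minimal sketch of what that enhancement could look like, assuming the fetch script is saved as yt_fetch.sh and the pending list lives in ~/pending-downloads (both names are my own placeholders, not from the post):

#!/bin/bash
# Sketch only: work through a snapshot of the pending list, and after each
# successful fetch remove that URL from the live file so it is never re-downloaded.
pending="$HOME/pending-downloads"
cp "$pending" "${pending}.work"
while IFS= read -r url; do
    [ -z "$url" ] && continue
    if ./yt_fetch.sh "$url"; then
        # Drop the completed URL from the pending file (exact whole-line match).
        grep -vxF -- "$url" "$pending" > "${pending}.tmp"
        mv "${pending}.tmp" "$pending"
    fi
done < "${pending}.work"
rm -f "${pending}.work"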
Yes, I could vandalise the Python script to resume rather than doing a fresh fetch, but I find the KISS principle very useful in general: a modded version would diverge from the original, and if the original is improved I can't just download the improved version. So I stick with the stock script as much as possible.
Comments
http://bitbucket.org/rg3/youtube-dl/wiki/Home
Brendan