Parse a web page in Perl

The target is a single website, and the spider only needs to go one level deep. The application will be written in Perl; a rough sketch of the whole flow appears after the numbered steps below.


----
1) Create a queue of URLs by scanning http://www.Fark.com (front page only) for URLs to the comment forums for each topic.
-The Fark.com front page displays the number of comments in each forum.
-Please store this number, along with its forum URL, in a text file (forums.txt), one entry per line, in the following form:
[forum URL, number of comments, timedate stamp] example: [http://forums.fark.com/cgi/fark/comments.pl?IDLink=2337513, 60, 18:55:36 10-10-2006]

2) Pull a URL from the queue and fetch the HTML page at that location.
-To be well behaved, skip the entry if the forum's comment count hasn't changed since the last scrape.

3) Scan the HTML page looking for image links following "<img src=".
-Do not download the image, I just want the link.
-Do not store image links whose domain portion contains the word "fark".

4) Write these image links into a text file (images.txt), one entry per line, in the following form:
[image link, forum URL] example: [http://members.cox.net/jboy820/oneflew1.jpg, http://forums.fark.com/cgi/fark/comments.pl?IDLink=2337513]
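
For a sense of the expected shape of the solution, here is a rough end-to-end sketch of the four steps above. It is not a definitive implementation: it assumes the LWP::UserAgent (CPAN) and POSIX (core) modules are available, the two regexes are illustrative guesses at the front-page and forum markup that must be verified against the live pages, and change detection between scrapes is done by rereading the forums.txt left by the previous run.

#!/usr/bin/perl
# Sketch only: the two regexes below are guesses at the Fark markup
# and must be checked against the live pages.
use strict;
use warnings;
use LWP::UserAgent;
use POSIX qw(strftime);

my $ua = LWP::UserAgent->new(agent => 'fark-comment-scraper/0.1');

# Step 1: scan the front page for forum URLs and their comment counts.
my $resp = $ua->get('http://www.fark.com/');
die 'front page fetch failed: ' . $resp->status_line unless $resp->is_success;
my $front = $resp->decoded_content;

my %seen = load_counts('forums.txt');   # counts from the previous scrape
my @queue;

open my $forums, '>', 'forums.txt' or die "forums.txt: $!";
while ($front =~ m{(http://forums\.fark\.com/cgi/fark/comments\.pl\?IDLink=\d+)[^>]*>\s*(\d+)}g) {
    my ($url, $count) = ($1, $2);
    my $stamp = strftime('%H:%M:%S %d-%m-%Y', localtime);
    print {$forums} "[$url, $count, $stamp]\n";

    # Step 2: only queue forums whose comment count changed since last time.
    push @queue, $url unless defined $seen{$url} && $seen{$url} == $count;
}
close $forums;

# Steps 3 and 4: fetch each queued forum page, extract image links, and
# append them to images.txt; the images themselves are never downloaded.
open my $images, '>>', 'images.txt' or die "images.txt: $!";
for my $forum_url (@queue) {
    my $page = $ua->get($forum_url);
    next unless $page->is_success;
    my $html = $page->decoded_content;
    while ($html =~ m{<img\s+src=["']?(http://([^/"'\s>]+)[^"'\s>]*)}gi) {
        my ($link, $domain) = ($1, $2);
        next if $domain =~ /fark/i;   # skip images hosted on a fark domain
        print {$images} "[$link, $forum_url]\n";
    }
    sleep 1;   # stay polite between requests
}
close $images;

# Helper: reread "[url, count, stamp]" lines written by the previous run.
sub load_counts {
    my ($file) = @_;
    my %counts;
    open my $fh, '<', $file or return %counts;   # first run: no file yet
    while (<$fh>) {
        $counts{$1} = $2 if /^\[(\S+),\s*(\d+),/;
    }
    close $fh;
    return %counts;
}

For anything beyond a quick sketch, a real parser module such as HTML::TokeParser would be more robust against Fark's markup than regexes.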

----
When bidding, be sure to include:
- your ICQ number and the hours you can be reached
- how long this task will take you
