Links are now extracted after applying excludeTags #828

Open · wants to merge 1 commit into base: main
2 changes in apps/api/src/scraper/WebScraper/single_url.ts: 1 addition & 1 deletion

@@ -447,7 +447,7 @@ export async function scrapSingleUrl(
      let linksOnPage: string[] | undefined;

      if (pageOptions.includeLinks) {
-       linksOnPage = extractLinks(rawHtml, urlToScrap);
+       linksOnPage = extractLinks(html, urlToScrap);
@nickscamara (Member) commented on Oct 28, 2024:
I think the problem with this is that we rely on this function for our /crawl, which will end up failing to grab all the links if we don't pass the raw version.
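To illustrate the concern: if excludeTags strips elements (say, a `<nav>`) before link extraction, any links inside those elements disappear from the result. The sketch below is not Firecrawl's actual code (which uses cheerio); it uses a naive regex stand-in for `extractLinks` just to show how raw and cleaned HTML can yield different link sets.

```typescript
// Hypothetical example: excludeTags: ["nav"] has removed the <nav> element.
const rawHtml = `
  <nav><a href="/docs">Docs</a><a href="/blog">Blog</a></nav>
  <main><a href="/pricing">Pricing</a></main>`;

// Simulated output after applying excludeTags
const cleanedHtml = `
  <main><a href="/pricing">Pricing</a></main>`;

// Naive href extraction, standing in for extractLinks()
const hrefs = (html: string): string[] =>
  [...html.matchAll(/href="([^"]+)"/g)].map((m) => m[1]);

console.log(hrefs(rawHtml));     // [ '/docs', '/blog', '/pricing' ]
console.log(hrefs(cleanedHtml)); // [ '/pricing' ]
```

Extracting from `cleanedHtml` loses `/docs` and `/blog`, which is exactly the failure mode described above if /crawl depended on this path.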

@mogery lmk if im wrong

@txrp0x9 (Author) commented on Oct 28, 2024:
A global code search did not reveal any usage of the linksOnPage field anywhere other than the API return. I believe the crawler uses a separate extractLinksFromHTML function:

    crawler.extractLinksFromHTML(rawHtml ?? "", sc.originUrl),

and

    links.push(...this.extractLinksFromHTML(content, url).map(url => ({ url, html: content, pageStatusCode, pageError })));

defined as:
    public extractLinksFromHTML(html: string, url: string) {
      let links: string[] = [];
      const $ = load(html);
      $("a").each((_, element) => {
        const href = $(element).attr("href");
        if (href) {
          const u = this.filterURL(href, url);
          if (u !== null) {
            links.push(u);
          }
        }
      });
      return links;
    }
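For context on the `filterURL(href, url)` step above: its implementation is not shown in this thread, but from its usage it appears to resolve an href against the page URL and return null for links that should be dropped. The standalone sketch below is an assumption about that behavior (the real Firecrawl helper may apply different scoping rules); it uses only the standard `URL` class.

```typescript
// Hypothetical filterURL-style helper (assumption: the real Firecrawl
// implementation differs). Resolves an href against the page URL and
// rejects links pointing at a different host.
function resolveAndFilter(href: string, pageUrl: string): string | null {
  try {
    const resolved = new URL(href, pageUrl); // handles relative hrefs
    const base = new URL(pageUrl);
    return resolved.hostname === base.hostname ? resolved.href : null;
  } catch {
    return null; // malformed href or base URL
  }
}

console.log(resolveAndFilter("/docs", "https://example.com/home"));
// "https://example.com/docs"
console.log(resolveAndFilter("https://other.com/x", "https://example.com/home"));
// null
```

Because each `<a>` href passes through this filter, the output of extractLinksFromHTML is a list of absolute, in-scope URLs rather than raw href attribute values.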

Member reply: Right, makes sense.

      }

      let document: Document = {