Puppeteer: Scrolling down twitter timeline stops
I am having trouble scraping all tweet URLs on a user timeline with Puppeteer.

The script is supposed to scroll down the timeline on each iteration of the while loop in the scrollToEnd function until it hits the bottom. To monitor progress, I made the script log the value of the previousHeight variable, which is the current scrollHeight of document.body, evaluated every time before the scroll is executed.

However, the scrolling stops once the logged value reaches 285,834. What's puzzling is that the script neither breaks out of the while loop nor does the page.waitForFunction method throw a timeout error.

How should I rewrite the scrollToEnd function, or any other part of the script, so that the function ends properly?

Here is a snippet of my code. Irrelevant functions are left out for brevity.
const puppeteer = require('puppeteer');

var UserUrls = ['https://twitter.com/someuser'];

// more functions here

async function scrollToEnd(page, ScrollDelay = 1000) {
  try {
    let previousHeight = 0;
    let notEnd = await page.waitForFunction(`document.body.scrollHeight > ${previousHeight}`);
    while (notEnd) {
      previousHeight = await page.evaluate('document.body.scrollHeight');
      await page.evaluate('window.scrollBy(0, document.body.scrollHeight)');
      await page.waitFor(ScrollDelay);
      notEnd = await page.waitForFunction(`document.body.scrollHeight > ${previousHeight}`);
      console.log(previousHeight);
    }
    return;
  } catch (e) {
    return;
  }
}

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  var tweetUrls = [];
  for (let UserUrl of UserUrls) {
    await page.goto(UserUrl);
    await page.evaluate((async () => {
      await scrollToEnd(page);
    })());
    await page.screenshot({ path: 'PageEnd.png' });
    tweetUrls = await getTweetUrls(page, extractItems, 100);
  }
  await browser.close();
  console.log(tweetUrls);
})();
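For reference on the mechanism involved here: page.waitForFunction resolves with a handle once the supplied expression becomes truthy, and rejects with a timeout error (30 seconds by default) if it never does, so notEnd is always truthy whenever the call returns at all. A minimal sketch that makes the timeout explicit and turns the outcome into a boolean follows; the helper name and the 5-second timeout are illustrative assumptions, not part of the original script.

// Hedged sketch: returns true if the page grew taller than `previousHeight`
// before the timeout, false if page.waitForFunction timed out instead.
// The helper name and the timeout value are illustrative assumptions.
async function heightGrewBeyond(page, previousHeight, timeout = 5000) {
  try {
    await page.waitForFunction(
      `document.body.scrollHeight > ${previousHeight}`,
      { timeout },
    );
    return true;  // the condition became truthy: new content pushed the height up
  } catch (err) {
    return false; // timed out: the scroll height did not increase
  }
}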
Tags: javascript, node.js, twitter, web-scraping, puppeteer
asked Mar 9 at 3:35 by figment
edited Mar 9 at 9:42 by vsemozhetbyt
1 Answer
Could you try one of these two approaches? The script below scrolls to the bottom either by comparing scroll heights (as you did) or by waiting for the element that marks the end of the stream to become visible. All scroll logic is placed inside functions evaluated in the browser context. Both functions return the tweet count for the full page, so you can compare the result with the tweet count declared at the top of the user's timeline. I've also increased the delay to 3 seconds for the first approach, as 1 second sometimes seems too short for the scroll height to change.
'use strict';

const puppeteer = require('puppeteer');

(async function main() {
  try {
    const browser = await puppeteer.launch({ headless: false });
    const [page] = await browser.pages();

    await page.goto('https://twitter.com/GHchangelog');
    const data1 = await page.evaluate(scrollToBottomByMaxHeight);
    console.log(`Tweets: ${data1}`);

    await page.goto('https://twitter.com/GHchangelog');
    const data2 = await page.evaluate(scrollToBottomByEndElement);
    console.log(`Tweets: ${data2}`);

    // await browser.close();
  } catch (err) {
    console.error(err);
  }
})();

// Approach 1: keep scrolling while the scroll height is still growing.
async function scrollToBottomByMaxHeight() {
  try {
    let previousHeight = 0;
    let currentHeight = document.scrollingElement.scrollHeight;

    while (previousHeight < currentHeight) {
      previousHeight = document.scrollingElement.scrollHeight;
      window.scrollBy(0, previousHeight);
      await new Promise((resolve) => { setTimeout(resolve, 3000); });
      currentHeight = document.scrollingElement.scrollHeight;
    }

    return document.querySelectorAll('a.js-permalink').length;
  } catch (err) {
    return err;
  }
}

// Approach 2: keep scrolling until the hidden "end of stream" element becomes visible.
async function scrollToBottomByEndElement() {
  try {
    const endElement = document.querySelector('div.stream-end');

    while (endElement.clientHeight === 0) {
      window.scrollBy(0, document.scrollingElement.scrollHeight);
      await new Promise((resolve) => { setTimeout(resolve, 1000); });
    }

    return document.querySelectorAll('a.js-permalink').length;
  } catch (err) {
    return err;
  }
}

answered Mar 9 at 9:30 by vsemozhetbyt (edited Mar 9 at 9:38)
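If you would rather keep the loop in the Node.js context, as in the original scrollToEnd, the same height-comparison idea can be sketched without page.waitForFunction at all. This is only a sketch under the assumption that a plain fixed delay between scrolls is acceptable; the function name and the 3-second delay are illustrative, not taken from the answer above.

// Hedged sketch: scroll from Node.js until document.body.scrollHeight stops growing.
// Assumes `page` is an already-navigated Puppeteer Page instance.
async function scrollUntilHeightStops(page, scrollDelay = 3000) {
  let previousHeight = await page.evaluate('document.body.scrollHeight');
  while (true) {
    await page.evaluate('window.scrollBy(0, document.body.scrollHeight)');
    await new Promise((resolve) => setTimeout(resolve, scrollDelay));
    const currentHeight = await page.evaluate('document.body.scrollHeight');
    if (currentHeight === previousHeight) {
      break; // no new content was loaded during the delay
    }
    previousHeight = currentHeight;
  }
}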
Comments:

The scrollToBottomByMaxHeight function returned 40, while the scrollToBottomByEndElement function returned 186. Am I right to conclude that the latter is the more reliable approach, since clientHeight should stay at 0 until the div.stream-end element is loaded? – figment, Mar 9 at 10:26

@figment I got 186 from both consistently, but the first approach seems more brittle because it depends on network responsiveness (you can try increasing the delay to 10 seconds to see if anything changes). So yes, I think the second approach is more reliable. – vsemozhetbyt, Mar 9 at 10:59

@figment div.stream-end is already present in the initial page state; it is just hidden until the end of the stream is reached, and until then its clientHeight is 0. – vsemozhetbyt, Mar 9 at 11:02
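To illustrate the visibility check discussed in the comments: on the legacy Twitter markup assumed by the answer, the sentinel element exists from the start but is hidden, so its clientHeight reads 0 until the end of the stream is reached. A sketch of that check, runnable in the page context, follows; the div.stream-end selector comes from the answer above and may not exist on the current Twitter layout.

// Hedged sketch: inspect the end-of-stream sentinel in the browser console.
// The 'div.stream-end' selector is taken from the answer above and assumes
// the legacy Twitter timeline markup.
const endElement = document.querySelector('div.stream-end');
if (endElement) {
  // clientHeight is 0 while the element is hidden and becomes non-zero
  // once Twitter reveals it at the bottom of the timeline.
  console.log('stream-end clientHeight:', endElement.clientHeight);
} else {
  console.log('No div.stream-end element found on this page.');
}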