Crawling Password Protected Websites Online

In previous tutorials, we've learned how to extract data from public websites: writing CSS selectors, handling pagination, and more. In this tutorial, we will learn how to crawl a password protected website online using Agenty.

To crawl a password protected website with Agenty, we must first authenticate our scraping agent with a username and password; then we can scrape the internal pages just as we do with public websites. Scraping the web with the Agenty hosted app is quick to set up using the extension and the agent editor. This tutorial shows how to get data from a password protected website after logging in successfully, and then how to schedule the scraper to automate your data scraping task.

Form authentication

Form-based authentication is the most widely used website protection technique: the website displays an HTML form where the user fills in a username and password and submits it to log in and access the secure pages or service. A scraping workflow for a password protected website with form authentication looks like below:

  1. Navigate to login page.
  2. Enter the username in the input field
  3. Enter the password in input field
  4. Click on the login button
  5. Start scraping internal pages.
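The five steps above boil down to one HTTP form submission followed by authenticated page requests. As a rough sketch (outside Agenty), the submission can be built with Python's standard library; the URL and field names below are hypothetical placeholders for whatever the real login form uses:

```python
# A minimal sketch of steps 1-4 as a single form POST, using only the
# Python standard library. LOGIN_URL and the field names "username" /
# "password" are assumptions; inspect the real form to find them.
from urllib.parse import urlencode
from urllib.request import Request

LOGIN_URL = "https://example.com/login"  # hypothetical login page

def build_login_request(username: str, password: str) -> Request:
    """Encode the credentials exactly as the browser's form POST would."""
    body = urlencode({"username": username, "password": password}).encode()
    return Request(LOGIN_URL, data=body, method="POST")

req = build_login_request("alice", "s3cret")
print(req.data)          # b'username=alice&password=s3cret'
print(req.get_method())  # POST
```

Sending this request (and keeping the session cookie it returns) is what makes step 5, scraping the internal pages, possible.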

The Agenty form-authentication engine provides the following commands to interact with a login page using CSS selectors, completing the initial login steps #1, #2, #3 and #4 before scraping of internal pages begins.

Command : Description : Required parameters*

NAVIGATE : To navigate to a webpage
  1. Value : the URL to navigate to.
TYPE : To type something in a text box
  1. CSS selector : selector of the text box.
  2. Value : the value to enter in the text box.
CLICK : To click on a button or link
  1. CSS selector : selector of the button/link to be clicked.
WAIT : To wait (n) seconds
  1. Value : number of seconds (int) to wait.
SelectByValue : To select an item from a dropdown list
  1. CSS selector : selector of the dropdown box.
  2. Value : the value to be selected.
JsClick : To click a button via JavaScript
  1. CSS selector : selector of the JavaScript button.
SubmitForm : To submit a web form
  1. CSS selector : selector of the submit button.
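To make the required-parameter column concrete, here is a hypothetical in-memory representation of the command table plus a small validator. This is an illustrative sketch only; Agenty's real configuration format may differ.

```python
# Required parameters per command, mirroring the table above
# (assumption: this mapping is for illustration, not Agenty's schema).
REQUIRED = {
    "NAVIGATE": {"value"},
    "TYPE": {"selector", "value"},
    "CLICK": {"selector"},
    "WAIT": {"value"},
    "SelectByValue": {"selector", "value"},
    "JsClick": {"selector"},
    "SubmitForm": {"selector"},
}

def validate(step: dict) -> bool:
    """Check that a command step carries all of its required parameters."""
    required = REQUIRED.get(step.get("command"))
    if required is None:
        return False  # unknown command
    return required.issubset(step.keys())

print(validate({"command": "TYPE", "selector": "#user", "value": "alice"}))  # True
print(validate({"command": "WAIT"}))  # False: missing the seconds value
```

A check like this catches a misconfigured login sequence (for example, a TYPE step with no selector) before any scraping job is started.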

To scrape data from behind a login, we need to edit the agent in the agent editor and enable the "Login to website" feature, following the steps below:

  1. Scroll down to the bottom of the agent page and click on the "Edit agent" button
    edit scraping agent
  2. Go to the "Password Authentication" tab and enable "Login to website" as in the screenshot below.

Now go to the website you want to log in to, and check the web page source to analyze the login form. For this tutorial, I'm going to use this example "" website, where the login form HTML looks like below.

In order to crawl this password protected website, we need to navigate to the login page, enter the username and password, and then click on the submit button to get authenticated; after that we can access the internal pages. So, the agent login events will look like below:

  • Navigate to the login page
  • Enter the username in the text box with CSS selector #Login1_UserName
  • Enter the password in the text box with CSS selector #Login1_Password
  • Click on the Sign In button with CSS selector #Login1_LoginButton
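The four events above can be expressed as an ordered command list. This is a sketch, not Agenty's exact configuration format; the selectors come from the tutorial's example form, and the login URL and credentials are left as placeholders:

```python
# The login sequence for the example form, as an ordered list of steps
# (assumption: a simplified stand-in for the agent's real config).
login_events = [
    {"command": "NAVIGATE", "value": "<login page URL>"},
    {"command": "TYPE", "selector": "#Login1_UserName", "value": "<username>"},
    {"command": "TYPE", "selector": "#Login1_Password", "value": "<password>"},
    {"command": "CLICK", "selector": "#Login1_LoginButton"},
]

for step in login_events:
    # Print each command with its selector (or value, for NAVIGATE)
    print(step["command"], step.get("selector", step.get("value")))
```

Order matters here: the NAVIGATE step must run first so the form exists before the TYPE and CLICK steps try to find it.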

A CSS selector can be written with a name, class or ID. For example, to click on the "Sign In" button, all of these selectors are valid:

  • #Login1_LoginButton
  • .submit
  • input.submit

Enable login to site for crawling

Once the configuration part is completed, save the scraping agent and go back to the main agent page to start and test your agent. To test whether login is working correctly, I entered some internal URLs in the URL list which were accessible only after login, and then started the scraping job.

It's always best practice to run a test job on a small number of URLs whenever the agent configuration is changed. That allows you to analyze the result and ensure everything is working as expected, instead of starting the agent on the entire list of URLs.

password protected site scraping

It took a few seconds to initialize and log in; then scraping started for the internal pages, and we can see the progress and the final result, as per the field selection, in the result output table.

crawled data behind login

Basic Authentication

HTTP basic authentication is a simple challenge mechanism by which a web server can request authentication information (typically a user ID and password) from a client. These websites don't have an HTML form to type credentials into or select with CSS selectors and then submit. Instead, the browser opens a popup dialog (as in the screenshot below) asking for credentials when you visit a secured page; the browser then encodes those credentials into a base64 string and sends it in the Authorization header to attempt the login.
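The header construction the browser performs can be reproduced in a few lines of Python; the credentials below are made up for illustration:

```python
# Sketch of what the browser does for HTTP basic authentication:
# base64-encode "username:password" and put it in the Authorization header.
import base64

def basic_auth_header(username: str, password: str) -> dict:
    """Build the Authorization header a basic-auth client sends."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

print(basic_auth_header("aladdin", "opensesame"))
# {'Authorization': 'Basic YWxhZGRpbjpvcGVuc2VzYW1l'}
```

Note that base64 is an encoding, not encryption: the server (or anyone intercepting plain HTTP) can trivially decode it, which is why basic authentication should only be used over HTTPS.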

Basic authentication

In order to crawl websites protected by basic authentication, we need to use the scraping agent's "Basic-authentication" feature, following the steps below:

  1. Edit the scraping agent
  2. Go to "Password Authentication" tab
  3. Enable the "Login to website" feature and select the "Authentication type" as "Basic-authentication"
  4. Enter the domain, username and password in the next section and save the agent.
  5. Go back to the main agent page and re-run the agent.
  6. Check the output or logs to ensure the agent is able to log in successfully.

The scraping agent automatically logs out of the domain after 20 minutes of inactivity on that domain, or earlier if the scraping job completes. So, if you are using the throttling feature to delay sequential requests, make sure there is no gap of more than 20 minutes.

crawl basic authentication protected website

Basic Authentication with FORM

We can also get our agent session authenticated by sending a Navigate request with the username and password. Just make the first request using form authentication with the URL format below:


For example :

basic authentication website crawling with form
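The conventional URL form for sending basic-auth credentials inline is `scheme://username:password@host/path` (an assumption here, since the exact format string is elided above). Python's standard library can take such a URL apart, which is handy for verifying what will actually be sent:

```python
# Splitting a credentials-in-URL address (hypothetical host and
# credentials) into its components with the standard library.
from urllib.parse import urlsplit

url = "https://alice:s3cret@example.com/secure/page"
parts = urlsplit(url)
print(parts.username, parts.password, parts.hostname)
# alice s3cret example.com
```

Keep in mind that credentials embedded in URLs can end up in logs and browser history, so this form is best reserved for that one initial authentication request.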


  • When crawling password protected websites, we recommend spending some time on analysis first, and using the website's dedicated login page instead of a dialog box or popup login when possible. You can often find that page by logging in and then logging out: most websites auto-redirect users to a dedicated login page on logout.
  • Add a 5-10 second wait after clicking the "Login" button or submitting the form, to give the website enough time to auto-redirect.
  • If the website requires AJAX or JavaScript, go to the Ajax and Pagination tab > Web Browser and enable the "fetch page by AJAX/JavaScript enabled browser" option.
    Note: enable the JavaScript browser only if you really need it, because it may slow down crawling; the extractor waits for the complete page load, including internal/external JavaScript, on each web page. You might also need to increase your timeout setting if the website is slow, to allow more time for the entire content to load (go to Advance options > Headers > increase the connection timeout to something higher than the default 6 seconds).

Want to extract data from behind a login? Let the Agenty team set up, execute and maintain your data scraping project - Request a quote