
This repository provides a Python-based solution to scrape detailed public LinkedIn profile data using the Crawlbase Crawling API.
It includes:
- A scraper that sends asynchronous requests to extract profile data.
- A retrieval script that fetches the final structured data using the request ID (RID).
📖 Read the full tutorial: How to Scrape LinkedIn
Requirements:
- `crawlbase` – for accessing the Crawling and Storage APIs
- `json` – standard library, for working with JSON responses
- Python 3.6+
Install the required package:
```bash
pip install crawlbase
```
File: linkedin_profile_scraper.py
- Sends an asynchronous scraping request to a public LinkedIn profile URL.
- Returns a `rid` (request ID) used to retrieve the data later.
- Replace `YOUR_API_TOKEN` with your Crawlbase token.
- Set the LinkedIn profile URL you want to scrape.
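A minimal sketch of what the script does is shown below. It uses the `crawlbase` package's `CrawlingAPI` class; the `async` option is what makes Crawlbase queue the crawl and respond immediately with an RID instead of the page body. Treat this as an illustrative sketch (the profile URL and error handling are placeholders), not the repository's exact code.

```python
# linkedin_profile_scraper.py (sketch)
import json

from crawlbase import CrawlingAPI

API_TOKEN = "YOUR_API_TOKEN"  # your Crawlbase token
PROFILE_URL = "https://www.linkedin.com/in/some-public-profile"  # profile to scrape (placeholder)


def scrape_profile(url):
    """Queue an asynchronous crawl for the profile and return the response JSON (contains the RID)."""
    api = CrawlingAPI({"token": API_TOKEN})
    # async=true tells Crawlbase to accept the request and reply with a request ID (rid)
    response = api.get(url, {"async": "true"})
    if response["status_code"] != 200:
        raise RuntimeError("Crawl request failed with status %s" % response["status_code"])
    return json.loads(response["body"])


if __name__ == "__main__":
    print(json.dumps(scrape_profile(PROFILE_URL), indent=2))
```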
Run:

```bash
python linkedin_profile_scraper.py
```

Example response:

```json
{
  "rid": "1dd4453c6f6bd93baf1d7e03"
}
```
File: linkedin_profile_retrieve.py
- Uses the `rid` from the scraper to fetch and print the structured LinkedIn profile data.
- Replace `YOUR_API_TOKEN` and `RID` in the script.
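A sketch of the retrieval step is below. The repository presumably uses the `crawlbase` package's Storage API wrapper; since that wrapper's exact signature isn't shown here, this sketch calls the Crawlbase Storage API endpoint directly with the standard library, passing the token and RID as query parameters (verify the endpoint and parameter names against the Storage API documentation).

```python
# linkedin_profile_retrieve.py (sketch)
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API_TOKEN = "YOUR_API_TOKEN"  # your Crawlbase token
RID = "RID"                   # request ID returned by linkedin_profile_scraper.py

# Assumed Storage API endpoint; check the Crawlbase docs for the exact parameters.
STORAGE_ENDPOINT = "https://api.crawlbase.com/storage"


def retrieve_profile(rid):
    """Fetch whatever Crawlbase stored for the given request ID and parse it as JSON."""
    query = urlencode({"token": API_TOKEN, "rid": rid})
    with urlopen("%s?%s" % (STORAGE_ENDPOINT, query)) as response:
        return json.loads(response.read())


if __name__ == "__main__":
    print(json.dumps(retrieve_profile(RID), indent=2))
```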
Run:

```bash
python linkedin_profile_retrieve.py
```

Example output:

```json
{
  "title": "Kaitlyn Owen",
  "headline": "",
  "sublines": ["Miami-Fort Lauderdale Area", "5K followers", "500+ connections"],
  "location": "Miami-Fort Lauderdale Area",
  ...
}
```
Future improvements:
- Add batching for scraping multiple profile URLs (see the sketch after this list)
- Save scraped data to CSV or JSON
- Add command-line interface (CLI) support
- Retry mechanism for failed requests
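As a starting point for the first item, batching could reuse the same asynchronous call and simply collect one RID per URL. A rough sketch (the file name and URL list are hypothetical):

```python
# batch_scraper.py (hypothetical) - queue several profiles and collect their RIDs
import json

from crawlbase import CrawlingAPI

API_TOKEN = "YOUR_API_TOKEN"
PROFILE_URLS = [
    "https://www.linkedin.com/in/profile-one",
    "https://www.linkedin.com/in/profile-two",
]


def submit_batch(urls):
    """Queue one asynchronous crawl per URL and map each URL to its RID (or None on failure)."""
    api = CrawlingAPI({"token": API_TOKEN})
    rids = {}
    for url in urls:
        response = api.get(url, {"async": "true"})
        if response["status_code"] == 200:
            rids[url] = json.loads(response["body"])["rid"]
        else:
            rids[url] = None  # a retry mechanism could re-queue these later
    return rids


if __name__ == "__main__":
    print(json.dumps(submit_batch(PROFILE_URLS), indent=2))
```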
Use cases:
- Track hiring trends and talent movement
- Gather public profile data for market research
- Build datasets for recruitment or networking tools