To effectively scrape news and articles data, here are the detailed steps: start by understanding the target website's structure, then select the right tools for the job, define your scraping strategy, handle common challenges like anti-bot measures, and finally, process and store the extracted data. For instance, using Python libraries like Beautiful Soup for parsing HTML and Requests for making HTTP requests is a common and robust approach. For dynamic content loaded via JavaScript, tools like Selenium or Playwright are indispensable. Always remember to check the website's robots.txt file (e.g., https://example.com/robots.txt) and terms of service to ensure compliance and ethical data collection.
The Art of Ethical Data Extraction: Understanding Web Scraping Fundamentals
Web scraping, at its core, is the automated extraction of data from websites.
For news and articles, this means pulling headlines, publication dates, author names, full article text, and even associated images or videos.
It’s a powerful technique, but it comes with a significant responsibility to act ethically and legally.
Think of it like a carefully executed data expedition, not a digital smash-and-grab.
What Exactly is Web Scraping?
Web scraping involves using software to simulate a human browsing the web, but at an incredibly fast and efficient pace.
Instead of reading articles one by one, a scraper can visit thousands of pages, identify specific elements, and collect the information you need.
- Automated Data Collection: This is the primary function, automating repetitive tasks that would take humans countless hours.
- Information Synthesis: Once collected, the data can be analyzed, categorized, and used for various purposes like trend analysis or content aggregation.
- Structured Data from Unstructured Sources: Websites are typically unstructured text and images. Scraping helps turn this into usable, structured data, often in formats like CSV, JSON, or databases.
The Importance of Adhering to robots.txt and Terms of Service
This isn't just a suggestion; it's a foundational principle. Ignoring robots.txt is akin to ignoring a "No Trespassing" sign, and violating terms of service can lead to legal repercussions. Always check these first.
- robots.txt: This file, usually found at the root of a domain (e.g., https://www.nytimes.com/robots.txt), tells web crawlers which parts of the site they are allowed to access or not access. It's a voluntary directive, but respecting it is a sign of good faith and professionalism (a programmatic check is sketched after this list). A typical file looks like this:

  ```
  User-agent: *        # applies to all bots
  Disallow: /private/  # do not crawl this directory
  Crawl-delay: 10      # wait 10 seconds between requests
  ```
- Terms of Service (ToS): Many websites explicitly forbid automated scraping in their ToS. Violating these can result in your IP being banned, or worse, legal action. Always read them carefully. If a website explicitly forbids scraping, it is imperative to respect their wishes. There are many other avenues for data collection that do not involve violating a website's expressed policies.
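To honor robots.txt programmatically rather than by eye, Python's standard library ships a parser. A minimal sketch, assuming a hypothetical target domain and a custom User-Agent string (both are placeholders, not recommendations):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical domain used purely for illustration
rp = RobotFileParser()
rp.set_url('https://www.example-news.com/robots.txt')
rp.read()  # Fetch and parse the robots.txt file

user_agent = 'MyNewsScraper/1.0'
url = 'https://www.example-news.com/tech/some-article'

if rp.can_fetch(user_agent, url):
    print("Allowed to fetch:", url)
else:
    print("robots.txt disallows fetching:", url)

# crawl_delay() returns the Crawl-delay value for this agent, if one is declared
print("Requested crawl delay:", rp.crawl_delay(user_agent))
```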
When to Seek Alternatives to Direct Scraping
While scraping is powerful, it’s not always the best or most ethical solution. Consider these alternatives first.
- Official APIs: The most robust and ethical approach. Many news organizations, like The New York Times (developer.nytimes.com) or NewsAPI (newsapi.org), offer official APIs specifically designed for developers to access their content. These APIs provide structured, clean data without the need for scraping, often with clear usage limits and terms. This is the preferred method as it respects the content creator's efforts and often provides richer, more reliable data.
- Data Providers/Aggregators: Services like Factiva, LexisNexis, or specialized news data providers offer curated news feeds and archives. While often paid, they provide legally acquired, high-quality data.
- RSS Feeds: Many news sites still offer RSS feeds, which provide a structured XML output of recent articles. This is a simple, legitimate way to get headlines and summaries (see the sketch after this list).
- Public Datasets: Check platforms like Kaggle or Google Dataset Search for pre-scraped or publicly available news datasets. For example, the “News Category Dataset” on Kaggle offers 200,000 news headlines from 2017 to 2018 with categories.
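To make the RSS option above concrete, here is a minimal sketch using the third-party feedparser package (pip install feedparser); the feed URL is a placeholder you would swap for a real feed advertised by the site:

```python
import feedparser

# Placeholder feed URL; check the site's footer or <link rel="alternate"> tags for real feeds
feed = feedparser.parse('https://www.example-news.com/rss.xml')

print("Feed title:", feed.feed.get('title', 'N/A'))
for entry in feed.entries[:10]:           # First 10 items
    print(entry.get('title', 'N/A'))      # Headline
    print(entry.get('link', 'N/A'))       # Canonical article URL
    print(entry.get('published', 'N/A'))  # Publication date as provided by the feed
    print('-' * 40)
```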
Setting Up Your Scraping Environment
Once you've decided that scraping is the appropriate and ethical path, setting up your environment correctly is the next crucial step.
This involves selecting your tools and ensuring you have the necessary foundations.
Choosing the Right Programming Language and Libraries
Python is the undisputed champion for web scraping due to its simplicity, vast ecosystem of libraries, and strong community support.
- Python: The de facto standard. Its readability and extensive library support make it ideal for beginners and experts alike.
  - requests: For making HTTP requests to fetch web page content. It's simple, elegant, and handles most basic needs.

    ```python
    import requests

    response = requests.get('https://example.com/news')
    html_content = response.text
    ```

  - Beautiful Soup 4 (bs4): A fantastic library for parsing HTML and XML documents. It creates a parse tree that you can navigate, search, and modify.

    ```python
    from bs4 import BeautifulSoup

    soup = BeautifulSoup(html_content, 'html.parser')
    title = soup.find('h1').text
    ```

  - lxml: A fast and powerful XML and HTML parsing library, often used as the parser backend for Beautiful Soup for improved performance.
  - Selenium: For scraping dynamic content that requires JavaScript execution. Selenium automates web browsers like Chrome or Firefox, mimicking human interaction.

    ```python
    from selenium import webdriver

    driver = webdriver.Chrome()
    driver.get('https://example.com/dynamic-news')
    page_source = driver.page_source
    ```

  - Playwright: A newer, very capable alternative to Selenium, also for headless browser automation. It supports multiple languages and offers excellent performance.

    ```python
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto('https://example.com/dynamic-news')
        content = page.content()
    ```

  - Scrapy: A powerful, high-level web crawling and scraping framework. It's ideal for large-scale, complex scraping projects, offering built-in features for handling requests, pipelines, and more.
    - Best for: Large-scale projects, data integrity, handling persistent connections, and more complex crawling logic.
    - Learning Curve: Steeper than requests + Beautiful Soup, but pays off for ambitious projects.
Essential Tools and Setup
Beyond libraries, some tools are crucial for a smooth scraping experience.
- Integrated Development Environment (IDE):
- VS Code: Highly recommended for its versatility, extensions, and excellent Python support.
- PyCharm: A dedicated Python IDE, great for larger projects.
- Browser Developer Tools: Your best friend for inspecting web page elements.
- Chrome DevTools Inspect Element: Right-click on any element on a web page and select “Inspect” or “Inspect Element” to see its HTML, CSS, and JavaScript. This is how you identify the tags, classes, and IDs you’ll target with your scraper.
- Firefox Developer Tools: Similar functionality, equally powerful.
- Virtual Environments: Crucial for managing project dependencies and avoiding conflicts.

  ```bash
  python -m venv venv
  source venv/bin/activate       # Linux/macOS
  # .\venv\Scripts\activate      # Windows
  pip install requests beautifulsoup4 selenium
  ```
- Headless Browsers for Selenium/Playwright: If using Selenium or Playwright, you'll often run them in "headless" mode, meaning the browser runs in the background without a visible GUI, which is more efficient for scraping servers.

  ```python
  options = webdriver.ChromeOptions()
  options.add_argument('--headless')
  driver = webdriver.Chrome(options=options)
  ```
Proxy Servers and User-Agent Rotation (Advanced)
For more robust scraping, especially against sophisticated anti-bot measures, these become essential.
- Proxy Servers: Route your requests through different IP addresses to avoid getting blocked by target websites that track IP addresses.
- Rotating Proxies: A service that provides a pool of IP addresses, cycling through them for each request.
- Ethical Use: Only use proxies for legitimate purposes. Abusing them can still lead to blocks or other issues.
- User-Agent Rotation: Websites often block requests coming from common bot user-agents. Mimicking different browser user-agents (e.g., Chrome on Windows, Firefox on macOS) can help bypass these blocks.

  ```python
  headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'}
  response = requests.get(url, headers=headers)
  ```
- Rate Limiting: Crucial for being a good internet citizen. Sending too many requests too quickly can overload a server and get your IP banned. Implement delays between requests.

  ```python
  import time

  time.sleep(2)  # wait 2 seconds between requests
  ```
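Putting rate limiting and User-Agent rotation together, here is a minimal sketch; the URL list and User-Agent strings are illustrative placeholders, and the delay range is a conservative starting point rather than a rule:

```python
import random
import time
import requests

# Small illustrative pool of common browser User-Agent strings
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1 Safari/605.1.15',
]

urls = ['https://www.example-news.com/tech', 'https://www.example-news.com/world']  # Placeholders

for url in urls:
    headers = {'User-Agent': random.choice(USER_AGENTS)}  # Pick a different UA per request
    try:
        response = requests.get(url, headers=headers, timeout=10)
        response.raise_for_status()
        print(f"Fetched {url} ({len(response.text)} bytes)")
    except requests.exceptions.RequestException as e:
        print(f"Failed to fetch {url}: {e}")
    time.sleep(random.uniform(2, 5))  # Randomized, human-like delay between requests
```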
Crafting Your Scraping Strategy: From Inspection to Extraction
This is where the rubber meets the road.
You need a clear plan of attack for each website you intend to scrape, starting with meticulous inspection and leading to precise data extraction.
Step 1: Inspecting the Website Structure
Before writing any code, thoroughly examine the target news website.
This is paramount for identifying the patterns and unique identifiers of the data you want.
- Identify Target Elements: What precisely do you want to scrape? Headlines, article text, publication dates, author names, image URLs, categories, comments?
  - For a news article, you'll typically look for:
    - <h1> or <div class="article-title"> for the headline.
    - <p> tags within a main article div for the body text.
    - <span class="pub-date"> or similar for the publication date.
    - <img src="some-image.jpg"> for images.
- Use Browser Developer Tools (Chrome/Firefox DevTools):
  1. Right-click on the element you want to scrape (e.g., an article headline).
  2. Select "Inspect" or "Inspect Element".
  3. This opens the Developer Tools, highlighting the HTML code corresponding to that element.
  4. Look for unique identifiers:
     * IDs (id="uniqueID"): Best for unique elements, as IDs should be unique on a page.
     * Classes (class="some-class another-class"): Common for groups of elements (e.g., all article headlines might share a class like article-header).
     * Tag Names (<div>, <span>, <a>): Useful when combined with classes or IDs.
     * Attributes (href, src, data-id): For extracting links, image sources, or custom data attributes.
- Observe HTML Patterns:
  - Are all articles listed under a common div with a specific class?
  - Does each article summary or link have a consistent structure?
  - Are there pagination links or "Load More" buttons that need to be handled for deep scraping?
- Examine Network Requests: In DevTools, go to the “Network” tab. This helps identify if data is loaded dynamically via JavaScript XHR/Fetch requests. If you see data loaded through these requests, you might need Selenium/Playwright or directly target the API endpoint if it’s publicly accessible.
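When the Network tab shows the page filling itself from a JSON endpoint, you can sometimes query that endpoint directly instead of rendering the page. A minimal sketch, assuming a hypothetical /api/articles endpoint; the field names are guesses you would replace after inspecting the real payload, and only if the endpoint is publicly accessible and permitted:

```python
import requests

# Hypothetical JSON endpoint discovered in the browser's Network tab
api_url = 'https://www.example-news.com/api/articles?page=1'

response = requests.get(api_url, headers={'Accept': 'application/json'}, timeout=10)
response.raise_for_status()
data = response.json()  # Parse the JSON payload

# 'articles', 'headline', and 'url' are assumed field names; inspect the real response first
for item in data.get('articles', []):
    print(item.get('headline'), '->', item.get('url'))
```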
Step 2: Sending HTTP Requests
Once you know the URL and the type of content static vs. dynamic, you send your request.
- For Static Content (requests library):

  ```python
  import requests

  url = 'https://www.example-news.com/tech'
  try:
      response = requests.get(url, timeout=10)  # Set a timeout
      response.raise_for_status()  # Raise an HTTPError for bad responses (4xx or 5xx)
      html_content = response.text
      print(f"Successfully fetched {len(html_content)} bytes from {url}")
  except requests.exceptions.RequestException as e:
      print(f"Error fetching {url}: {e}")
      html_content = None
  ```
- For Dynamic Content (Selenium or Playwright):

  ```python
  from selenium import webdriver
  from selenium.webdriver.chrome.service import Service
  from webdriver_manager.chrome import ChromeDriverManager
  from selenium.webdriver.common.by import By
  from selenium.webdriver.support.ui import WebDriverWait
  from selenium.webdriver.support import expected_conditions as EC

  url = 'https://www.dynamic-news-site.com/latest'

  # Setup Chrome options for headless browsing
  options = webdriver.ChromeOptions()
  options.add_argument('--headless')
  options.add_argument('--disable-gpu')            # Recommended for headless on some systems
  options.add_argument('--no-sandbox')             # Bypass OS security model
  options.add_argument('--disable-dev-shm-usage')  # Overcome limited resource problems

  # Initialize the WebDriver
  try:
      service = Service(ChromeDriverManager().install())
      driver = webdriver.Chrome(service=service, options=options)
      driver.get(url)
      # Wait for the specific element that contains the article list to be present.
      # This is crucial for dynamic content.
      WebDriverWait(driver, 20).until(
          EC.presence_of_element_located((By.CSS_SELECTOR, 'div.article-list-container'))
      )
      html_content = driver.page_source
      print(f"Successfully loaded dynamic content from {url}")
  except Exception as e:
      print(f"Error loading dynamic content from {url}: {e}")
      html_content = None
  finally:
      if 'driver' in locals() and driver:
          driver.quit()  # Always close the browser
  ```

  - Note: webdriver_manager simplifies driver setup. For Playwright, the syntax is similar, as shown in the previous section.
Step 3: Parsing the HTML Content
Now you have the raw HTML.
It's time to make sense of it using Beautiful Soup.

- Initialize Beautiful Soup:

  ```python
  from bs4 import BeautifulSoup

  if html_content:
      soup = BeautifulSoup(html_content, 'html.parser')
      print("HTML content parsed successfully.")
  else:
      print("No HTML content to parse.")
      soup = None
  ```
- Finding Elements with find and find_all:
  - find(tag, attributes): Finds the first occurrence of an element.
  - find_all(tag, attributes): Finds all occurrences of elements matching the criteria.
  - Common selectors:
    - By tag name: soup.find_all('a') (all links)
    - By class: soup.find_all('div', class_='article-item')
    - By ID: soup.find('h1', id='main-title')
    - By attributes: soup.find_all('img', src=True) (all images with a src attribute)
    - By CSS selector: soup.select('div.main-content p.article-text') (more powerful, like in CSS)
- Example: Extracting News Article Links and Headlines from a Listing Page

  ```python
  if soup:
      articles_data = []
      # Assuming each article summary is within a div with class 'news-item'
      article_containers = soup.find_all('div', class_='news-item')

      if not article_containers:
          print("No article containers found with class 'news-item'. Adjust selector.")
          # Try a different common selector if the first fails
          article_containers = soup.select('article.post')  # Example: another common pattern

      for article in article_containers:
          title_tag = article.find('h2', class_='article-title')  # Find the title within each article container
          link_tag = article.find('a', class_='article-link')     # Find the link

          if title_tag and link_tag and link_tag.get('href'):
              title = title_tag.get_text(strip=True)
              link = link_tag['href']  # Extract the href attribute
              # Handle relative URLs
              if not link.startswith('http'):
                  link = f"https://www.example-news.com{link}"  # Prepend base URL
              articles_data.append({'title': title, 'url': link})
          else:
              print(f"Skipping article due to missing title or link in: {article}")

      if articles_data:
          print(f"Found {len(articles_data)} articles:")
          for item in articles_data[:5]:  # Print first 5 for preview
              print(f"- {item['title']} ({item['url']})")
      else:
          print("No articles extracted. Check your selectors and HTML structure.")
  ```
Step 4: Extracting Specific Data Points Deep Dive into an Article
Once you have the link to an individual article, you repeat the process to scrape its full content.
- Example: Scraping a Full Article Page

  ```python
  def scrape_single_article(article_url):
      print(f"Attempting to scrape: {article_url}")
      try:
          response = requests.get(article_url, timeout=15)
          response.raise_for_status()
          article_soup = BeautifulSoup(response.text, 'html.parser')

          # Extract Headline
          headline = article_soup.find('h1', class_='article-headline')
          headline_text = headline.get_text(strip=True) if headline else 'N/A'

          # Extract Author
          author_tag = article_soup.find('span', class_='author-name')
          author_name = author_tag.get_text(strip=True) if author_tag else 'N/A'

          # Extract Publication Date
          date_tag = article_soup.find('time', class_='publish-date')
          publication_date = date_tag['datetime'] if date_tag and 'datetime' in date_tag.attrs else 'N/A'

          # Extract Article Body Text
          # Often, the main content is within a specific div or article tag.
          # You might need to experiment with selectors here.
          article_body_div = article_soup.find('div', class_='article-body')
          article_text = ""
          if article_body_div:
              # Find all paragraph tags within the body div
              paragraphs = article_body_div.find_all('p')
              for p in paragraphs:
                  article_text += p.get_text(strip=True) + "\n\n"
          else:
              print(f"Could not find article body div for {article_url}")

          # Basic cleaning for body text
          article_text = article_text.strip()
          if len(article_text) < 100:  # Simple check for very short or empty content
              print(f"Warning: Article text for {article_url} seems too short or empty.")

          return {
              'headline': headline_text,
              'author': author_name,
              'publication_date': publication_date,
              'full_text': article_text,
              'url': article_url
          }
      except requests.exceptions.RequestException as e:
          print(f"Error scraping article {article_url}: {e}")
          return None
      except Exception as e:
          print(f"An unexpected error occurred while scraping {article_url}: {e}")
          return None


  # Example usage:
  # Let's say articles_data from Step 3 has at least one entry
  if articles_data:
      first_article_url = articles_data[0]['url']
      detailed_article = scrape_single_article(first_article_url)
      if detailed_article:
          print("\n--- Detailed Article Data ---")
          for key, value in detailed_article.items():
              print(f"{key}: {str(value)[:100]}...")  # Print first 100 chars of each field
  ```
Step 5: Handling Pagination and “Load More” Buttons
News websites often display content across multiple pages or load it dynamically.
- Pagination (Numbered Pages):
  - Identify the URL pattern for subsequent pages (e.g., ?page=2, ?p=3).
  - Loop through these URLs until no more pagination links are found or a defined limit is reached.

  ```python
  base_url = 'https://www.example-news.com/archive?page='
  page_num = 1
  while True:
      current_page_url = f"{base_url}{page_num}"
      print(f"Scraping page: {current_page_url}")
      response = requests.get(current_page_url)
      soup = BeautifulSoup(response.text, 'html.parser')

      # Extract articles from this page...

      # Check for next page link/button
      next_page_link = soup.find('a', class_='next-page-button')
      if not next_page_link:
          print("No more pages found.")
          break
      page_num += 1
      time.sleep(2)  # Be polite!
  ```
- "Load More" Buttons (Dynamic Loading - requires Selenium/Playwright):
  1. Identify the "Load More" button's selector (ID, class, or XPath).
  2. Use Selenium/Playwright to click the button repeatedly until no more content loads or the button disappears.

  ```python
  from selenium.webdriver.common.by import By
  from selenium.common.exceptions import NoSuchElementException, ElementClickInterceptedException, TimeoutException

  driver.get('https://www.dynamic-news-site.com/feed')

  while True:
      try:
          load_more_button = WebDriverWait(driver, 10).until(
              EC.element_to_be_clickable((By.CSS_SELECTOR, 'button.load-more'))
          )
          load_more_button.click()
          print("Clicked 'Load More' button.")
          time.sleep(3)  # Give time for new content to load
      except (NoSuchElementException, ElementClickInterceptedException, TimeoutException):
          print("No more 'Load More' button found or clickable.")
          break
      except Exception as e:
          print(f"An error occurred clicking 'Load More': {e}")
          break

  html_content = driver.page_source
  # Now parse html_content with Beautiful Soup
  ```
This structured approach, from initial inspection to handling dynamic content, forms the backbone of effective and ethical web scraping.
Navigating Anti-Scraping Measures and Ethical Considerations
Modern websites employ various techniques to prevent automated scraping.
Understanding and respectfully navigating these is crucial for long-term success and ethical conduct.
Common Anti-Scraping Techniques
Websites use these measures to protect their servers, bandwidth, and proprietary content.
- IP Blocking: The most common. If too many requests come from a single IP address in a short period, the website blocks that IP.
- Solution: Use proxy rotation residential proxies are harder to detect than data center proxies, or implement significant delays between requests.
- User-Agent String Checks: Websites might block requests that don't have a legitimate browser User-Agent or that use common bot User-Agents (e.g., Python-requests/2.25.1).
  - Solution: Rotate User-Agents to mimic real browsers.
- CAPTCHAs: “Completely Automated Public Turing test to tell Computers and Humans Apart.” These pop up to verify you’re human.
  - Solution: For occasional CAPTCHAs, manual solving is an option. For frequent ones, consider CAPTCHA solving services (an ethical gray area; use with caution and only if absolutely necessary and permitted by the ToS) or reconsider scraping the site. Re-evaluate if an API exists.
- JavaScript Rendering: Data is loaded dynamically via JavaScript after the initial page load, meaning requests alone won't get it.
  - Solution: Use headless browsers like Selenium or Playwright that execute JavaScript.
- Honeypot Traps: Invisible links or elements designed to trap bots. Clicking them can immediately ban your IP.
  - Solution: Be careful with broad find_all('a') calls and always check link visibility before clicking. Focus on specific, visible elements.
- Rate Limiting: Websites intentionally slow down responses or return errors if requests come too fast.
  - Solution: Implement time.sleep() delays between requests. Start with longer delays (e.g., 5-10 seconds) and gradually reduce them if the site tolerates it. Randomize delays for a more human-like pattern (e.g., time.sleep(random.uniform(2, 5))).
- Session/Cookie Tracking: Websites track user sessions to identify repeated requests from the same “user.”
- Solution: Ensure your scraper handles cookies properly, or use a new session/cookie for each request, or rotate IP addresses frequently.
Best Practices for Ethical and Respectful Scraping
Being a good netizen is crucial.
Abuse can lead to legal issues, IP bans, and harm the reputation of ethical data collection.
- Always Check robots.txt First: This is your primary guideline. If it says Disallow: / for a path, you stay out of that part of the site.
- Read Terms of Service (ToS): Many ToS explicitly prohibit scraping. If scraping is forbidden, do not proceed. Seek alternative data sources like official APIs, commercial datasets, or public datasets. This is a non-negotiable point for responsible data practices.
- Implement Generous Delays (time.sleep): This is the most important measure to prevent overloading the server. Aim for at least 2-5 seconds between requests, or even more for larger sites. Randomize delays to avoid a predictable pattern (e.g., time.sleep(random.uniform(1.5, 4.0))).
- Mimic Human Behavior:
- Randomized delays: Avoid a fixed interval.
- Random User-Agents: Rotate through a list of common browser User-Agents.
  - No Concurrent Requests Initially: Start with one request at a time. Only consider concurrent requests (e.g., with asyncio or ThreadPoolExecutor) after thoroughly testing the site's tolerance and with very generous delays.
- Identify Yourself (Optional but Recommended): Some scrapers add a custom User-Agent that includes contact information (e.g., User-Agent: MyNewsScraper/1.0 contact@example.com). This allows site administrators to reach out if they have concerns instead of just blocking you.
- Error Handling: Implement robust try-except blocks to handle network errors, timeouts, or changes in website structure gracefully. Don't let your script crash; log errors and continue.
- Incremental Scraping: Instead of re-scraping the entire site daily, only scrape new content or content that has changed. Use publication dates or unique article IDs to track what you've already collected.
- Avoid Overloading Servers: Your goal is to get data, not to launch a denial-of-service attack. If you notice slow responses or errors, back off and increase delays.
- Respect Data Privacy: Do not scrape personal identifiable information PII unless you have explicit consent and a legitimate, legal reason.
Processing and Storing Your Scraped Data
Raw HTML data is often messy.
The real value comes from cleaning, structuring, and storing it in a usable format.
Cleaning and Structuring the Data
Data cleaning is often the most time-consuming part of data processing.
- Remove Unwanted Elements: Get rid of HTML tags, scripts, CSS, advertisements, and navigation elements that are not part of the core article content. BeautifulSoup can help; the tag list below is an example you would adapt to the site:

  ```python
  # Strip non-content elements (adjust the tag list to the target site)
  for tag in soup.find_all(['script', 'style', 'nav', 'aside']):
      tag.extract()
  ```
- Text Cleaning:
  - Whitespace: Remove extra spaces, newlines, and tabs (text.strip(), ' '.join(text.split())).
  - Special Characters: Remove or replace non-standard characters, emojis, or encoding issues.
  - URLs/Emails: Remove or anonymize if not needed.
  - Common Phrases: Remove phrases like "Read More," "Advertisement," or "Subscribe Now" that might appear in the scraped text.
- Standardize Formats:
  - Dates: Convert various date formats (e.g., "Jan 1, 2023", "2023-01-01") into a consistent YYYY-MM-DD format using Python's datetime module (see the sketch after this list).
  - Numbers: Ensure numbers are parsed as integers or floats, not strings.
- Handling Missing Data: Decide how to handle fields that couldn't be scraped (e.g., None, empty string, default value). Log these instances.
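As a concrete illustration of the cleaning and date-standardization steps above, here is a minimal sketch; the boilerplate phrases and the list of input date formats are examples you would adapt to your own sources:

```python
import re
from datetime import datetime

BOILERPLATE = ['Read More', 'Advertisement', 'Subscribe Now']  # Example phrases to strip

def clean_text(raw):
    text = raw
    for phrase in BOILERPLATE:
        text = text.replace(phrase, '')
    text = re.sub(r'\s+', ' ', text)  # Collapse runs of spaces, tabs, and newlines
    return text.strip()

def standardize_date(raw):
    # Try a few known input formats and emit a consistent YYYY-MM-DD string
    for fmt in ('%b %d, %Y', '%Y-%m-%d', '%d %B %Y'):
        try:
            return datetime.strptime(raw.strip(), fmt).strftime('%Y-%m-%d')
        except ValueError:
            continue
    return 'N/A'  # Could not parse; log it and handle downstream

print(clean_text('  Breaking   news... Read More '))  # -> 'Breaking news...'
print(standardize_date('Jan 1, 2023'))                # -> '2023-01-01'
```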
Choosing the Right Storage Format
The best storage format depends on your data volume, how you’ll use the data, and whether you need structured querying.
- CSV (Comma Separated Values):
  - Pros: Simple, human-readable, easily opened in spreadsheets (Excel, Google Sheets). Good for small to medium datasets.
  - Cons: Not ideal for complex, hierarchical data. Can be tricky with special characters and large text fields.
  - When to Use: Quick analysis, sharing with non-technical users, datasets up to a few hundred thousand rows.

  ```python
  import csv

  # Example data:
  all_articles_data = []  # Assume this list is populated with your scraped dictionaries

  if all_articles_data:
      keys = all_articles_data[0].keys()  # Get keys from the first dictionary to use as headers
      csv_file_path = 'news_articles.csv'
      try:
          with open(csv_file_path, 'w', newline='', encoding='utf-8') as output_file:
              dict_writer = csv.DictWriter(output_file, fieldnames=keys)
              dict_writer.writeheader()
              dict_writer.writerows(all_articles_data)
          print(f"Data successfully saved to {csv_file_path}")
      except Exception as e:
          print(f"Error saving data to CSV: {e}")
  else:
      print("No data to save to CSV.")
  ```
- JSON (JavaScript Object Notation):
  - Pros: Excellent for semi-structured data, hierarchical data, and easy integration with web applications. Human-readable.
  - Cons: Can be less efficient for very large datasets than binary formats.
  - When to Use: Data exchange, API-like storage, when you need to store nested structures (e.g., an article with associated comments, or multiple image URLs).

  ```python
  import json

  json_file_path = 'news_articles.json'
  try:
      with open(json_file_path, 'w', encoding='utf-8') as f:
          json.dump(all_articles_data, f, ensure_ascii=False, indent=4)
      print(f"Data successfully saved to {json_file_path}")
  except Exception as e:
      print(f"Error saving data to JSON: {e}")
  ```
- Databases (SQL/NoSQL):
  - SQL (e.g., SQLite, PostgreSQL, MySQL):
- Pros: Relational structure, powerful querying with SQL, data integrity, scalable.
- Cons: Requires schema definition, can be more complex to set up.
- When to Use: Large datasets, need for complex querying and relationships e.g., articles, authors, categories as separate tables, long-term storage, data analysis.
    - Example (SQLite):

      ```python
      import sqlite3

      db_file = 'news_database.db'
      conn = None
      try:
          conn = sqlite3.connect(db_file)
          cursor = conn.cursor()

          # Create table
          cursor.execute('''
              CREATE TABLE IF NOT EXISTS articles (
                  id INTEGER PRIMARY KEY AUTOINCREMENT,
                  headline TEXT NOT NULL,
                  author TEXT,
                  publication_date TEXT,
                  full_text TEXT,
                  url TEXT UNIQUE NOT NULL
              )
          ''')
          conn.commit()

          # Insert data (ensure 'all_articles_data' is populated with dictionaries as expected)
          for article in all_articles_data:
              try:
                  # Use INSERT OR IGNORE to prevent adding duplicate articles based on unique URL
                  cursor.execute('''
                      INSERT OR IGNORE INTO articles (headline, author, publication_date, full_text, url)
                      VALUES (?, ?, ?, ?, ?)
                  ''', (article['headline'], article['author'], article['publication_date'],
                        article['full_text'], article['url']))
              except sqlite3.IntegrityError:
                  print(f"Skipping duplicate article: {article.get('url', 'N/A')}")
          conn.commit()
          print(f"Data successfully saved to SQLite database {db_file}")
      except sqlite3.Error as e:
          print(f"SQLite error: {e}")
      finally:
          if conn:
              conn.close()
      ```
  - NoSQL (e.g., MongoDB, Elasticsearch):
- Pros: Flexible schema document-oriented, good for semi-structured or rapidly changing data, highly scalable for large data volumes.
- Cons: Less mature querying than SQL for complex joins, eventual consistency models.
- When to Use: Very large, diverse datasets, when schema flexibility is more important than strict relational integrity, real-time data ingestion for search applications Elasticsearch.
Data Integrity and Deduplication
Crucial for maintaining a clean and useful dataset.
- Identify Duplicates: Often, the article URL is the best unique identifier. Store a list of scraped URLs to avoid processing the same article multiple times.
- Checksums: For article content, you can calculate a hash (e.g., MD5 or SHA-256) of the cleaned article text. If the hash is the same, the content is likely a duplicate (see the sketch after this list).
- Update Logic: If you’re running the scraper regularly, decide whether to:
- Append only: Add new articles.
- Upsert: Update existing articles if their content has changed, or insert new ones. This requires a unique key like URL and a “last updated” timestamp on the article.
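A minimal sketch of that deduplication logic, using the article URL as the primary key and a SHA-256 fingerprint of the cleaned text to catch the same story republished at a different URL; the in-memory sets stand in for whatever persistent store you actually use:

```python
import hashlib

seen_urls = set()    # URLs already processed (load from your database in a real run)
seen_hashes = set()  # Fingerprints of article bodies already stored

def is_duplicate(article):
    url = article['url']
    content_hash = hashlib.sha256(article['full_text'].encode('utf-8')).hexdigest()
    if url in seen_urls or content_hash in seen_hashes:
        return True
    seen_urls.add(url)
    seen_hashes.add(content_hash)
    return False

article = {'url': 'https://www.example-news.com/a1', 'full_text': 'Some cleaned article body...'}
print(is_duplicate(article))  # False the first time
print(is_duplicate(article))  # True on a repeat
```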
By following these steps, you transform raw, chaotic web data into structured, clean, and valuable information ready for analysis or application.
Advanced Scraping Techniques for Complex Scenarios
Sometimes, standard requests and Beautiful Soup aren't enough. Here's when you need to bring out the big guns.
Handling JavaScript-Rendered Content with Selenium/Playwright
As discussed, when requests.get(url).text returns an empty or incomplete page source, it means the content is loaded dynamically by JavaScript.
- How it Works: Selenium and Playwright launch a real web browser or a headless version and control it programmatically. This browser executes JavaScript, loads all content, and then you can access the fully rendered HTML.
- Key Operations:
  - Browser Launch: Initialize a browser instance (webdriver.Chrome()).
  - Navigate: driver.get(url) to open a URL.
  - Wait for Elements: Crucial! Pages load asynchronously. Use WebDriverWait and ExpectedConditions to wait until a specific element is visible, clickable, or present before trying to interact with it or scrape its content. This prevents NoSuchElementException.

    ```python
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.common.by import By
    from selenium.common.exceptions import TimeoutException

    # ... driver setup ...
    driver.get('https://example.com/dynamic-site')
    try:
        # Wait until an element with ID 'article-list' is present
        WebDriverWait(driver, 15).until(
            EC.presence_of_element_located((By.ID, 'article-list'))
        )
        print("Article list element is present.")
        html = driver.page_source
        # Now parse 'html' with Beautiful Soup
    except TimeoutException:
        print("Timed out waiting for article list.")
    ```

  - Interactions:
    - driver.find_element(By.ID, 'button_id').click() to click buttons.
    - driver.find_element(By.NAME, 'input_name').send_keys('text') to fill forms.
    - driver.execute_script('window.scrollTo(0, document.body.scrollHeight);') to scroll the page, triggering lazy loading.
  - Get Page Source: driver.page_source after the page has fully loaded or interactions are complete.
  - Close Browser: driver.quit() or browser.close() (Playwright) to release resources.
Utilizing Scrapy for Large-Scale Scraping
For complex, large-scale news aggregation, Scrapy is a must. It's not just a library; it's a full-fledged framework.
- Key Features:
- Asynchronous Request Handling: Efficiently manages thousands of concurrent requests without blocking.
- Pipelines: Process scraped items e.g., clean data, save to database after they’ve been extracted.
- Spiders: Define how to crawl a site and extract data from pages.
- Middlewares: Customize requests and responses e.g., for proxy rotation, User-Agent rotation, retries.
- Built-in Rate Limiting and Throttling: Helps you be polite to websites.
- Robust Error Handling: Designed for resilience in face of network errors or site changes.
- When to Use Scrapy:
- Crawling hundreds or thousands of pages.
- Scraping multiple websites with similar structures.
- Need for persistent storage e.g., database integration.
- Handling complex navigation login, forms, multiple levels of links.
- Basic Scrapy Workflow:
  1. scrapy startproject my_news_scraper
  2. Define a Spider in my_news_scraper/spiders/.
  3. Define Items to represent your structured data.
  4. Configure Pipelines to process and store items.
  5. Run with scrapy crawl my_spider_name.
Example Scrapy Spider (simplified):

```python
import scrapy

class NewsSpider(scrapy.Spider):
    name = 'example_news_spider'
    start_urls = ['https://www.example-news.com/']  # Replace with your news site

    def parse(self, response):
        # This method parses the initial response.
        # Extract headlines and links from the listing page.
        for quote in response.css('div.quote'):  # Replace with your news article selectors
            yield {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('small.author::text').get(),
            }

        # Follow pagination links
        next_page = response.css('li.next a::attr(href)').get()
        if next_page is not None:
            yield response.follow(next_page, self.parse)  # Recursively call parse for next page
```
Handling Anti-Bot Challenges with Proxies and CAPTCHA Services
These are often the last line of defense for a website, and ethically approaching them is key.
- Proxy Management:
- Proxy Pools: Maintain a list of proxies either free, but often unreliable, or paid, more robust ones.
    - Rotation Logic: Rotate through proxies for each request or after a certain number of requests (see the sketch after this list).
- Failure Detection: Implement logic to detect failed proxies and remove them from the pool.
    - Types: Residential proxies (IPs from real users, harder to detect) vs. data center proxies (IPs from data centers, easier to detect). Residential proxies are significantly more expensive but offer higher success rates against sophisticated anti-bot systems.
- CAPTCHA Solving:
- Manual Solving: If you’re scraping small amounts of data, you can pause your script and manually solve CAPTCHAs.
    - CAPTCHA Solving Services (e.g., 2Captcha, Anti-Captcha, DeathByCaptcha): These services use human workers to solve CAPTCHAs programmatically. You send the CAPTCHA image/data, they return the solution.
- Ethical Consideration: While these services exist, their use should be carefully considered against the website’s ToS. If a site is heavily protected by CAPTCHAs, it’s a strong signal they do not want automated access. Re-evaluate if a legitimate API or data source is available.
- Web Scraping APIs e.g., ScrapingBee, ScraperAPI, Bright Data:
- These services act as intermediaries. You send them the URL, and they handle the browser automation, proxy rotation, CAPTCHA solving often, and return the raw HTML or parsed data.
- Pros: Simplifies complex scraping significantly, abstracts away anti-bot challenges.
- Cons: Can be more expensive for high volumes.
- When to Use: When you need a quick solution, don’t want to manage infrastructure, or when target sites have very strong anti-bot measures. These often come with ethical usage policies and can sometimes provide a more compliant way to access public data.
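For the proxy rotation described above, a minimal sketch with the requests library follows; the proxy addresses are placeholders that would come from your own paid or self-managed pool, and the simple round-robin plus failure print is only a starting point:

```python
import itertools
import requests

# Placeholder proxy pool; real pools come from a provider or your own servers
PROXIES = [
    'http://user:pass@proxy1.example.com:8000',
    'http://user:pass@proxy2.example.com:8000',
]
proxy_cycle = itertools.cycle(PROXIES)  # Round-robin rotation

def fetch_with_proxy(url):
    for _ in range(len(PROXIES)):  # Try each proxy at most once
        proxy = next(proxy_cycle)
        try:
            response = requests.get(url, proxies={'http': proxy, 'https': proxy}, timeout=10)
            response.raise_for_status()
            return response.text
        except requests.exceptions.RequestException as e:
            print(f"Proxy {proxy} failed for {url}: {e}")  # Simple failure detection
    return None

html = fetch_with_proxy('https://www.example-news.com/tech')
```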
Maintaining and Scaling Your Scraper
Websites change, and your scraping needs might grow.
Maintaining and scaling your scraper is essential for long-term data collection.
Dealing with Website Structure Changes
Websites are dynamic. What works today might break tomorrow.
- Regular Monitoring: Periodically check your target websites manually to observe any design or structural changes.
- Robust Selectors:
  - Avoid overly specific or fragile selectors (e.g., body > div:nth-child(3) > section:nth-child(2) > article:nth-child(1) > div:nth-child(1) > h1).
  - Prefer using IDs if available, as they are typically unique and stable.
  - Use meaningful class names (article-title, post-content) over generic ones.
  - Use XPath or CSS selectors that target elements based on their content or nearby elements, making them more resilient (e.g., an XPath that matches an h2 by its text content).
- Error Logging and Alerting:
  - Implement detailed logging for failed requests, missing elements, or unexpected HTML structures (a minimal sketch follows this list).
- Set up alerts email, Slack notification when your scraper fails or encounters significant errors. This allows for quick intervention.
- Version Control: Store your scraper code in a version control system like Git. This allows you to track changes, revert to working versions, and collaborate.
- Test Cases: For critical data points, write small unit tests that assert the presence of specific elements or the correctness of extracted data. Run these tests after detecting website changes.
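A minimal sketch of the logging side, using Python's standard logging module; the log file path and the selector are placeholders, and a real setup might also forward WARNING-level records to email or Slack:

```python
import logging

# Write structured log lines to a file that an alerting job can watch
logging.basicConfig(
    filename='scraper.log',  # Placeholder path
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(message)s',
)

def parse_headline(soup, url):
    headline = soup.find('h1', class_='article-headline')  # Placeholder selector
    if headline is None:
        # A missing selector is often the first sign the site layout changed
        logging.warning("Headline selector missing on %s - check for layout changes", url)
        return None
    logging.info("Parsed article %s", url)
    return headline.get_text(strip=True)
```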
Scheduling and Automating Your Scraper
For continuous news monitoring, you need automation.
- Cron Jobs (Linux/macOS) / Task Scheduler (Windows): For simple, recurring tasks on a single machine.

  ```bash
  # crontab -e
  # runs every 6 hours
  0 */6 * * * /usr/bin/python3 /path/to/your/scraper.py >> /path/to/log.log 2>&1
  ```
- Cloud Functions (AWS Lambda, Google Cloud Functions, Azure Functions):
- Pros: Serverless, pay-per-execution, scales automatically, less infrastructure management.
  - Cons: Can be more complex to set up initially, and they might have execution time limits (e.g., 15 minutes for Lambda) that require breaking down large scraping jobs. Good for single-page scrapes or small batches.
- Dedicated Servers / VPS:
- Pros: Full control, can run long-running processes, dedicated resources.
- Cons: Requires server management, higher cost than serverless for intermittent use.
- Orchestration Tools (e.g., Apache Airflow, Prefect):
- Pros: For complex workflows, dependency management, scheduling, monitoring, and retries.
- Cons: Significant learning curve, overkill for simple scraping tasks. Best for production-level data pipelines.
Scalability Considerations for Large Datasets
As your data volume grows, so do the challenges.
- Distributed Scraping: For truly massive scale, you might need to run multiple scraper instances across different machines or cloud instances.
  - Queues (RabbitMQ, Apache Kafka, Redis): Use message queues to distribute URLs to multiple worker scrapers and collect results.
- Load Balancers: Distribute incoming requests if building a scraping service or outgoing requests if using many proxies.
- Database Performance:
  - Indexing: Ensure your database tables are properly indexed for faster querying, especially on frequently searched columns like url or publication_date.
  - Database Sharding/Clustering: For extreme scale, distribute data across multiple database servers.
- Cloud Storage (S3, GCS): For storing large raw HTML files or large processed datasets that don't fit well into a traditional database.
- Resource Management:
- Memory: Scraping can be memory-intensive, especially with headless browsers. Monitor memory usage.
- CPU: Parsing and JavaScript execution can be CPU-intensive.
  - Network Bandwidth: Be mindful of your bandwidth consumption, especially if hosting on cloud platforms where egress traffic incurs charges.
- Cost Optimization:
- Cloud Services: Optimize instance types, use spot instances where possible, leverage serverless functions for intermittent tasks to reduce costs.
- Proxy Costs: Paid proxies can be a significant expense. Balance performance with cost.
By proactively planning for maintenance and scalability, your news and article scraping efforts can evolve from a simple script into a robust and reliable data collection system.
Legal and Ethical Safeguards for Web Scraping
As Muslim professionals, our approach to any endeavor, including data collection, must be grounded in principles of honesty, integrity, and respect.
While the technical aspects of scraping are fascinating, the ethical and legal framework is paramount.
We must ensure our actions are permissible and beneficial, avoiding any form of injustice or deception.
This means adhering to the spirit of the law and the principles of good conduct.
Understanding Copyright and Data Ownership
The content you scrape, especially news articles, is almost certainly copyrighted.
- Copyright Law: Most creative works published online, including news articles, are automatically protected by copyright. This grants the creator exclusive rights to reproduce, distribute, display, and create derivative works from their content.
- Implications for Scraping:
- Personal Use vs. Commercial Use: Scraping for personal research or academic study might fall under “fair use” or “fair dealing” in some jurisdictions. However, redistributing, republishing, or commercializing scraped content without permission is a direct copyright infringement.
- Creating Derivative Works: Summarizing, rephrasing, or analyzing content can sometimes be considered a derivative work. If you intend to build a product or service based on scraped data, you must obtain explicit permission or licensing from the copyright holder.
- Data Aggregation vs. Content Republishing: Aggregating headlines and links with attribution linking back to the source is generally more acceptable than copying entire articles. The key is to add value through analysis or new presentation, not simply replicate the original content.
Ethical Considerations in Data Collection
Beyond legal boundaries, there are ethical lines drawn by common courtesy, good conscience, and Islamic principles.
- Resource Consumption: Flooding a website with requests can degrade its performance, cost the owner bandwidth, and even lead to a denial-of-service. This is akin to unjustly burdening another’s resources. Implement generous delays and crawl during off-peak hours.
- Privacy of Individuals: If you accidentally scrape any personal identifiable information (PII) like names, email addresses, or comments from individuals, you have a moral and often legal obligation to protect that data. Do not store, share, or process it without explicit consent and adherence to privacy regulations like GDPR or CCPA. News articles typically focus on public figures or events, but comments sections or user profiles can contain PII.
- Attribution and Transparency: If you use scraped data in a public project, always attribute the original source clearly and link back to them. Be transparent about your data collection methods if asked.
- Beneficial Use: Consider the ultimate purpose of your data collection. Is it for positive research, informed decision-making, or building something genuinely useful for the community? Avoid scraping for malicious purposes or to gain an unfair advantage.
Seeking Permission and Licensing
This is the most secure and ethical path for using data that you don’t own.
- Direct Contact: Reach out to the website owner or their legal/media department. Explain your project, what data you need, and how you intend to use it. Be prepared to pay for a license.
- Official APIs: As previously mentioned, these are designed for legitimate programmatic access. They often come with clear terms of use and rate limits, providing a compliant way to get data. Always prioritize official APIs when available.
- Commercial Data Providers: If direct licensing is too complex or costly, consider purchasing data from vendors who specialize in news aggregation and licensing. They’ve done the legal groundwork for you.
In essence, ethical web scraping, particularly for news and articles, is a practice that requires careful consideration of the rights of content creators, the resources of website hosts, and the privacy of individuals, all while striving for positive and permissible outcomes.
Real-World Applications and Case Studies of News Scraping
News and article data, once collected and processed, can be an incredibly rich source of information for various applications, driving insights and fostering innovation.
Market Research and Trend Analysis
Understanding public sentiment and emerging trends is critical for businesses and researchers.
- Identifying Emerging Topics: By scraping news articles over time and performing topic modeling (e.g., Latent Dirichlet Allocation, LDA) or keyword frequency analysis, you can spot new topics gaining traction in specific industries or geographies. For example, tracking the mentions of "AI ethics" or "quantum computing" in tech news could show rising public and industry interest.
- Competitor Monitoring: Scrape news about competitors to understand their press coverage, new product launches, partnerships, or executive changes. This provides competitive intelligence. Example: A tech company might scrape news about Apple, Google, and Microsoft to see their latest announcements and public reception.
- Consumer Sentiment Analysis: Use Natural Language Processing (NLP) on scraped news articles and social media mentions (where permissible) to gauge public sentiment towards brands, products, or political events. Are articles about a new electric vehicle positive, negative, or neutral? This can be quantified using sentiment scores (a sketch follows this list). Statistics: A 2021 study by Brandwatch found that companies actively monitoring brand mentions and sentiment online saw a 20-25% improvement in customer satisfaction scores compared to those who didn't.
- Economic Indicators: Analyze news mentions of economic terms e.g., “recession,” “inflation,” “job growth” to correlate with financial market movements or economic forecasts. This can be a supplementary data point for economists.
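To make the sentiment idea concrete, here is a minimal sketch using NLTK's VADER analyzer; it is one of several reasonable tools rather than the definitive choice, and the headlines are invented examples:

```python
# pip install nltk
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download('vader_lexicon', quiet=True)  # One-time lexicon download
analyzer = SentimentIntensityAnalyzer()

headlines = [  # Invented examples
    "New electric vehicle praised for record-breaking range",
    "Automaker faces criticism over delayed safety recall",
]

for headline in headlines:
    scores = analyzer.polarity_scores(headline)
    # 'compound' ranges from -1 (very negative) to +1 (very positive)
    print(f"{scores['compound']:+.2f}  {headline}")
```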
Academic Research and Data Journalism
News archives provide a vast corpus for linguistic, social, and historical analysis.
- Historical Analysis: Researchers can scrape decades of news archives to study how specific events were covered, how language evolved, or how public discourse changed over time. For instance, analyzing articles from the 1960s vs. today to see changes in reporting on environmental issues.
- Linguistic Studies: Analyze patterns in language use, journalistic style, or the prevalence of certain terminologies across different news outlets or time periods.
- Data Journalism: Journalists use scraped data to uncover trends, verify claims, or create interactive visualizations that tell compelling stories based on hard data. Example: ProPublica often uses data scraping to uncover systemic issues, such as analyzing government documents or public databases.
- Bias Detection: By scraping news from various outlets and applying NLP techniques, researchers can attempt to identify potential media bias by analyzing word choice, framing, and emphasis on certain topics. This helps in understanding the diverse narratives presented by different media.
Content Aggregation and Search Engines
Building platforms that summarize or make news more discoverable.
- Custom News Feeds: Create a personalized news aggregator that pulls articles from specific sources or on specific topics of interest to a user. This is what many news reader apps do, though often through APIs or RSS feeds.
- Niche Search Engines: Develop specialized search engines for a particular industry e.g., medical news, legal news that index articles from relevant sources.
- Content Curation: For content marketers or researchers, scraping can help identify top-performing articles or trending topics to inform their own content creation strategies. A survey by HubSpot indicated that 65% of businesses leverage content marketing strategies, and data-driven content is key.
AI and Machine Learning Applications
News data is a vital input for training and evaluating AI models.
- Training Language Models: Large datasets of news articles are used to train powerful Large Language Models LLMs like GPT-3/4. These models learn grammar, semantics, factual knowledge, and even stylistic nuances from vast text corpora.
- News Classification: Train machine learning models to automatically categorize news articles into predefined topics e.g., “Politics,” “Sports,” “Technology,” “Health”. This can be used for automating news feeds or improving search relevance.
- Event Extraction: Develop models that can identify and extract specific events e.g., “acquisition,” “election,” “natural disaster” and their associated entities who, what, when, where from news text.
- Recommendation Systems: Build systems that recommend news articles to users based on their reading history and preferences, using collaborative filtering or content-based recommendations on scraped article data.
These applications highlight the immense potential of news and article data when collected and utilized responsibly and ethically.
The key is to transform raw data into actionable intelligence, benefiting various fields from business to academia.
Frequently Asked Questions
How do I start scraping news data if I’m a beginner?
To start scraping news data as a beginner, begin with Python.
Focus on two core libraries: requests for fetching web pages and Beautiful Soup for parsing HTML.
Start with a simple, static website and try to extract basic elements like headlines and links.
Always inspect the website’s HTML structure using your browser’s developer tools to understand how data is organized before writing code.
Is it legal to scrape news and article data?
The legality of scraping news and article data is complex and depends on several factors, including the country, the website's terms of service, and how the data is used. In many cases, it is not legal or ethical to scrape content without permission. Websites often explicitly forbid it in their Terms of Service. Always check the robots.txt file and, more importantly, the website's Terms of Service. It's often best to seek official APIs or licensed datasets as alternatives.
What's the difference between requests and Selenium for scraping?
requests is used for fetching static web page content (HTML, CSS, images) directly from a server. It doesn't execute JavaScript.
Selenium, on the other hand, automates a real web browser (or a headless one) to interact with websites.
This allows it to render dynamic content loaded by JavaScript, click buttons, fill forms, and simulate human interaction, making it suitable for modern, interactive websites.
Can I scrape news from a website that has a “Load More” button?
Yes, you can scrape news from a website with a "Load More" button, but you'll typically need a browser automation tool like Selenium or Playwright. These tools can programmatically click the "Load More" button, wait for new content to appear, and then scrape the newly loaded data. Standard requests won't work in such scenarios because it doesn't execute JavaScript.
How can I avoid getting my IP banned when scraping?
To avoid IP bans, implement rate limiting by adding delays between your requests (e.g., time.sleep(random.uniform(2, 5)) seconds). Rotate your User-Agent strings to mimic different browsers. For larger-scale operations, use proxy servers (especially residential proxies) to route your requests through different IP addresses. Always respect the website's robots.txt and Terms of Service.
What is robots.txt and why is it important?
robots.txt is a file that website owners use to communicate with web crawlers and scrapers, specifying which parts of their site should not be accessed.
It’s a voluntary guideline, not a legal mandate, but respecting it is a fundamental ethical practice in web scraping.
Ignoring it can lead to your IP being blocked and can be seen as an aggressive act against the website owner.
What data formats are best for storing scraped news?
The best data format depends on your needs. CSV is simple and good for tabular data, easily opened in spreadsheets. JSON is excellent for semi-structured or hierarchical data and is highly compatible with web applications. Databases SQL like SQLite/PostgreSQL or NoSQL like MongoDB are ideal for larger datasets, complex querying, and long-term storage.
How do I handle missing data during scraping?
When data is missing (e.g., an author name or publication date), implement robust error handling in your parsing logic. You can assign default values like None, an empty string "", or "N/A" for missing fields. Crucially, log any instances of missing data to understand if it's a common issue or an anomaly, which might indicate a need to refine your selectors.
Can I scrape news articles for sentiment analysis?
Yes, scraping news articles for sentiment analysis is a common application.
After extracting the full text of articles, you would typically use Natural Language Processing (NLP) techniques and libraries like NLTK, spaCy, or pre-trained sentiment models to determine the emotional tone (positive, negative, neutral) of the content.
Remember to respect copyright and ToS if using the data for commercial purposes.
What are official APIs and why are they better than scraping?
Official APIs (Application Programming Interfaces) are standardized ways for developers to access a website's data directly, provided by the website owner.
They are preferred over scraping because they offer structured, clean data, are designed for programmatic access, come with clear usage terms, and do not violate any implicit or explicit rules of the website.
They are the most ethical and reliable method of data acquisition.
How do I scrape news that requires login credentials?
Scraping news that requires login credentials is more complex and often ethically questionable unless you own the account or have explicit permission.
You'd need a browser automation tool like Selenium or Playwright to simulate the login process (entering username/password and clicking submit). Storing credentials securely and respecting the site's terms of service is paramount.
What is the typical success rate of web scraping for news?
The success rate of web scraping for news varies widely depending on the target website’s complexity, anti-scraping measures, and how frequently its structure changes.
Simple, static sites might have a high success rate (90%+) initially, while highly dynamic sites with strong anti-bot measures might have a lower or constantly fluctuating success rate, requiring significant maintenance.
Should I use Python for web scraping?
Yes, Python is highly recommended for web scraping.
Its extensive ecosystem of libraries (requests, Beautiful Soup, Selenium, Scrapy) makes it versatile for various scraping tasks, from simple scripts to large-scale projects.
Its readability and strong community support also make it beginner-friendly.
How can I make my scraper more resilient to website changes?
To make your scraper more resilient, use robust HTML selectors (e.g., IDs, unique class names, or XPath that targets based on content) rather than fragile positional selectors. Implement comprehensive error handling and logging.
Regularly monitor target websites for structural changes.
Consider setting up alerts for scraper failures and use version control for your code.
Can I scrape images along with news articles?
Yes, you can scrape images along with news articles. When parsing the HTML with Beautiful Soup, look for <img> tags and extract the src attribute, which contains the image URL. You can then use the requests library to download the image file from that URL and save it locally, ensuring you respect copyright and bandwidth.
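A minimal sketch of that image-download flow, assuming you already have a soup object for an article page; the base URL and output folder are placeholders, and the same copyright caveats apply to images as to text:

```python
import os
import requests
from urllib.parse import urljoin

base_url = 'https://www.example-news.com/some-article'  # Page the soup was parsed from
os.makedirs('images', exist_ok=True)

for img in soup.find_all('img', src=True):
    img_url = urljoin(base_url, img['src'])  # Resolve relative URLs
    filename = os.path.join('images', os.path.basename(img_url.split('?')[0]) or 'image.jpg')
    try:
        resp = requests.get(img_url, timeout=10)
        resp.raise_for_status()
        with open(filename, 'wb') as f:
            f.write(resp.content)  # Save the raw image bytes
        print(f"Saved {img_url} -> {filename}")
    except requests.exceptions.RequestException as e:
        print(f"Could not download {img_url}: {e}")
```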
What is the role of CSS selectors and XPath in scraping?
CSS selectors and XPath are powerful ways to locate specific elements within an HTML document. CSS selectors are familiar to anyone who's styled web pages and are concise for many common selections (e.g., div.article-content p). XPath is a query language for XML/HTML documents, offering more flexibility and power for complex selections, especially when elements lack unique IDs or classes (e.g., selecting an element based on its text content or its position relative to other elements).
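A small side-by-side sketch of the two styles, using Beautiful Soup for the CSS selector and lxml for the XPath; the HTML snippet is a toy example:

```python
from bs4 import BeautifulSoup
from lxml import html as lxml_html

sample = '''
<div class="article-content">
  <h2>Top Stories</h2>
  <p class="article-text">First paragraph.</p>
  <p class="article-text">Second paragraph.</p>
</div>
'''

# CSS selector via Beautiful Soup
soup = BeautifulSoup(sample, 'html.parser')
paragraphs = [p.get_text() for p in soup.select('div.article-content p.article-text')]

# XPath via lxml: select the h2 whose text contains "Top"
tree = lxml_html.fromstring(sample)
headings = tree.xpath('//h2[contains(text(), "Top")]/text()')

print(paragraphs)  # ['First paragraph.', 'Second paragraph.']
print(headings)    # ['Top Stories']
```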
How often should I run my news scraper?
The frequency of running your news scraper depends on the volatility of the news source and your specific needs.
For breaking news, you might run it every few minutes. For daily summaries, once a day might suffice.
For historical archives, a single run might be enough.
Always consider the website's robots.txt crawl-delay directives and be mindful of overloading their servers.
Is it necessary to use a proxy server for scraping news?
It is not always necessary for very small-scale, occasional scraping of static, non-protected sites.
However, for continuous, larger-scale scraping, or for sites with anti-bot measures, using proxy servers becomes crucial to avoid getting your IP banned and to maintain high success rates.
How can I make my scraper more efficient?
To make your scraper more efficient, use lxml as Beautiful Soup's parser for faster parsing. Implement asynchronous requests (e.g., with asyncio and httpx) for concurrent fetching of pages, with respectful delays. Optimize your parsing logic to avoid unnecessary iterations. Store data incrementally to avoid re-scraping existing content. For very large-scale jobs, use frameworks like Scrapy.
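Since the answer mentions asyncio with httpx, here is a minimal concurrent-fetch sketch; the URLs are placeholders, and the semaphore plus randomized sleep keep the concurrency polite rather than aggressive:

```python
import asyncio
import random
import httpx  # pip install httpx

URLS = ['https://www.example-news.com/tech', 'https://www.example-news.com/world']  # Placeholders
semaphore = asyncio.Semaphore(3)  # At most 3 requests in flight at once

async def fetch(client, url):
    async with semaphore:
        await asyncio.sleep(random.uniform(1, 3))  # Respectful, randomized delay
        response = await client.get(url, timeout=10)
        response.raise_for_status()
        return url, len(response.text)

async def main():
    async with httpx.AsyncClient() as client:
        results = await asyncio.gather(*(fetch(client, u) for u in URLS), return_exceptions=True)
        for result in results:
            print(result)

asyncio.run(main())
```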
What are the ethical implications of scraping news content?
The ethical implications involve respecting the content creators' intellectual property (copyright), not overloading the website's servers (bandwidth/resource consumption), adhering to their stated policies (robots.txt, Terms of Service), and respecting user privacy if any personal information is inadvertently collected.
It’s about being a responsible digital citizen and ensuring your data collection aligns with principles of fairness and integrity.