
ross/requests-futures

Asynchronous Python HTTP Requests for Humans using Futures


Top Related Projects

  • grequests: Requests + Gevent = <3
  • aiohttp: Asynchronous HTTP client/server framework for asyncio and Python
  • httpx: A next generation HTTP client for Python. 🦋
  • requests: A simple, yet elegant, HTTP library.
  • urllib3: urllib3 is a user-friendly HTTP client library for Python

Quick Overview

Requests-Futures is an extension to the popular Python requests library that allows for asynchronous HTTP requests. It combines the simplicity of the requests API with the power of concurrent.futures to enable non-blocking network operations, improving performance for I/O-bound tasks.

Pros

  • Easy to use, maintaining the familiar requests API
  • Significantly improves performance for multiple HTTP requests
  • Seamless integration with existing requests-based code
  • Supports both ThreadPoolExecutor and ProcessPoolExecutor

Cons

  • Limited to Python's threading model, which may not be as efficient as async/await for very large numbers of concurrent requests
  • Doesn't support streaming responses
  • May require careful management of resources for large-scale applications
  • Less actively maintained compared to some alternative async HTTP libraries

Code Examples

  1. Basic usage:

from requests_futures.sessions import FuturesSession

session = FuturesSession()
future = session.get('https://api.github.com/user')
response = future.result()
print(response.json())

  2. Multiple concurrent requests:

urls = ['https://api.github.com/user', 'https://api.github.com/repos']
futures = [session.get(url) for url in urls]
responses = [future.result() for future in futures]

  3. Using a response hook (older releases accepted a background_callback argument; current releases use requests' hooks mechanism instead):

def cb(resp, *args, **kwargs):
    print(f"Status: {resp.status_code}")

future = session.get('https://api.github.com/user', hooks={'response': cb})

Getting Started

To get started with requests-futures:

  1. Install the library:

    pip install requests-futures
    
  2. Import and use in your Python code:

    from requests_futures.sessions import FuturesSession
    
    session = FuturesSession()
    future = session.get('https://api.example.com')
    response = future.result()
    print(response.text)
    

This basic example demonstrates how to make an asynchronous GET request and retrieve the result. The FuturesSession class provides the same methods as the standard requests.Session, but returns Future objects instead of Response objects.
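The futures returned by FuturesSession are standard concurrent.futures objects, so the usual Future API applies: done() to poll, result(timeout=...) to block with a deadline. A minimal standard-library sketch (fake_request is a stand-in for a real HTTP call, and the URL is hypothetical) illustrates the semantics:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fake_request(url):
    # stand-in for an HTTP call: sleep briefly, then "respond"
    time.sleep(0.1)
    return f"response for {url}"

executor = ThreadPoolExecutor(max_workers=2)
future = executor.submit(fake_request, 'https://api.example.com')

# the call runs in the background; result() blocks until it finishes
print(future.result(timeout=5))
print(future.done())  # True once result() has returned
executor.shutdown()
```

FuturesSession behaves the same way, except that submit() is called for you and the future resolves to a requests Response.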

Competitor Comparisons

grequests: Requests + Gevent = <3

Pros of grequests

  • Built on gevent, which can handle a larger number of concurrent requests more efficiently
  • Provides a simpler API for making asynchronous requests
  • Maintains a closer resemblance to the original requests library syntax

Cons of grequests

  • Requires gevent, which can be challenging to install and configure on some systems
  • May have compatibility issues with certain libraries due to gevent's monkey-patching
  • Less actively maintained compared to requests-futures

Code Comparison

grequests:

import grequests

urls = ['http://example.com', 'http://example.org']
rs = (grequests.get(u) for u in urls)
responses = grequests.map(rs)

requests-futures:

from requests_futures.sessions import FuturesSession

session = FuturesSession()
futures = [session.get(url) for url in urls]
responses = [f.result() for f in futures]

Both libraries aim to provide asynchronous HTTP requests functionality, building upon the popular requests library. grequests offers a more concise syntax and potentially better performance for a large number of concurrent requests, thanks to gevent. However, requests-futures has fewer dependencies and may be easier to integrate into existing projects without compatibility concerns. The choice between the two depends on specific project requirements, such as the scale of concurrent requests needed and the importance of maintaining compatibility with other libraries.


aiohttp: Asynchronous HTTP client/server framework for asyncio and Python

Pros of aiohttp

  • Built on asyncio, providing true asynchronous I/O
  • More feature-rich, supporting both client and server-side operations
  • Better performance for high-concurrency scenarios

Cons of aiohttp

  • Steeper learning curve, especially for those new to async programming
  • Requires more complex code structure compared to synchronous alternatives
  • May be overkill for simple use cases or small-scale applications

Code Comparison

requests-futures:

from requests_futures.sessions import FuturesSession

session = FuturesSession()
future = session.get('https://example.com')
response = future.result()

aiohttp:

import aiohttp
import asyncio

async def fetch():
    async with aiohttp.ClientSession() as session:
        async with session.get('https://example.com') as response:
            return await response.text()

asyncio.run(fetch())

Summary

aiohttp is a more powerful and flexible library, offering true asynchronous I/O and better performance for high-concurrency scenarios. However, it comes with a steeper learning curve and more complex code structure. requests-futures provides a simpler interface for asynchronous HTTP requests, making it easier to use for basic tasks, but may not be as efficient for large-scale applications. The choice between the two depends on the specific requirements of your project and your familiarity with asynchronous programming concepts.


httpx: A next generation HTTP client for Python. 🦋

Pros of httpx

  • Supports both sync and async HTTP requests
  • Built-in support for HTTP/2
  • More modern and actively maintained

Cons of httpx

  • Larger dependency footprint
  • Steeper learning curve for those familiar with requests

Code comparison

requests-futures:

from requests_futures.sessions import FuturesSession

session = FuturesSession()
future = session.get('https://example.com')
response = future.result()

httpx:

import httpx

with httpx.Client() as client:
    response = client.get('https://example.com')

Summary

httpx is a more modern and feature-rich library, offering both synchronous and asynchronous HTTP requests, as well as HTTP/2 support. It's actively maintained and follows current Python best practices. However, it has a larger dependency footprint and may require more learning for developers accustomed to requests.

requests-futures, on the other hand, is a simpler extension of the popular requests library, focusing primarily on asynchronous requests. It's easier to pick up for those already familiar with requests, but lacks some of the advanced features and ongoing development of httpx.

Choose httpx for more comprehensive HTTP capabilities and future-proofing, or stick with requests-futures for a simpler, requests-like experience with asynchronous support.


requests: A simple, yet elegant, HTTP library.

Pros of Requests

  • Widely adopted and well-maintained library with extensive documentation
  • Supports synchronous HTTP requests with a simple, intuitive API
  • Offers built-in features like session handling, authentication, and cookie persistence

Cons of Requests

  • Lacks native support for asynchronous operations
  • May not be suitable for high-performance scenarios requiring concurrent requests

Code Comparison

Requests:

import requests

response = requests.get('https://api.example.com')
print(response.json())

Requests-Futures:

from requests_futures.sessions import FuturesSession

session = FuturesSession()
future = session.get('https://api.example.com')
response = future.result()
print(response.json())

Key Differences

  • Requests-Futures is built on top of Requests, adding asynchronous capabilities
  • Requests-Futures allows for concurrent requests, potentially improving performance for multiple API calls
  • Requests-Futures requires additional setup and slightly different usage compared to standard Requests

Use Cases

  • Requests: General-purpose HTTP requests, simple API interactions
  • Requests-Futures: Scenarios requiring multiple concurrent requests, such as batch processing or parallel API calls
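The batch-processing pattern mentioned above boils down to submitting many calls at once and collecting the results. A sketch using only the standard library (fetch is a stand-in for session.get(url).result(), and the URLs are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # stand-in for session.get(url).result(); returns a fake payload
    return {'url': url, 'status': 200}

urls = [f'https://api.example.com/item/{i}' for i in range(5)]

# submit the whole batch; map() yields results in submission order
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(fetch, urls))

print(len(results))  # 5
```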

urllib3: urllib3 is a user-friendly HTTP client library for Python

Pros of urllib3

  • More comprehensive HTTP client library with advanced features
  • Direct support for connection pooling and thread safety
  • Widely used and battle-tested in production environments

Cons of urllib3

  • Slightly more complex API compared to requests-futures
  • Requires more setup for asynchronous operations

Code Comparison

urllib3:

import urllib3

http = urllib3.PoolManager()
response = http.request('GET', 'https://api.example.com')
print(response.data)

requests-futures:

from requests_futures.sessions import FuturesSession

session = FuturesSession()
future = session.get('https://api.example.com')
response = future.result()
print(response.text)

Key Differences

  • urllib3 provides lower-level control over HTTP operations
  • requests-futures focuses on asynchronous requests using futures
  • urllib3 requires manual connection pooling setup, while requests-futures handles it automatically
  • urllib3 offers more flexibility for advanced use cases, while requests-futures provides a simpler API for async operations

Use Cases

  • Choose urllib3 for more control over HTTP operations and advanced features
  • Opt for requests-futures when simplicity and easy async functionality are priorities


README

Asynchronous Python HTTP Requests for Humans

.. image:: https://travis-ci.org/ross/requests-futures.svg?branch=master
   :target: https://travis-ci.org/ross/requests-futures

A small add-on for the Python requests_ HTTP library. It makes use of Python 3.2's concurrent.futures_ or the backport_ for prior versions of Python.

The additional API and changes are minimal and strive to avoid surprises.

The following synchronous code:

.. code-block:: python

from requests import Session

session = Session()
# first request starts and blocks until finished
response_one = session.get('http://httpbin.org/get')
# second request starts once first is finished
response_two = session.get('http://httpbin.org/get?foo=bar')
# both requests are complete
print('response one status: {0}'.format(response_one.status_code))
print(response_one.content)
print('response two status: {0}'.format(response_two.status_code))
print(response_two.content)

It can be translated to make use of futures, and thus be asynchronous, by creating a FuturesSession and capturing the returned Future in place of the Response. The Response can be retrieved by calling the result method on the Future:

.. code-block:: python

from requests_futures.sessions import FuturesSession

session = FuturesSession()
# first request is started in background
future_one = session.get('http://httpbin.org/get')
# second request is started immediately
future_two = session.get('http://httpbin.org/get?foo=bar')
# wait for the first request to complete, if it hasn't already
response_one = future_one.result()
print('response one status: {0}'.format(response_one.status_code))
print(response_one.content)
# wait for the second request to complete, if it hasn't already
response_two = future_two.result()
print('response two status: {0}'.format(response_two.status_code))
print(response_two.content)

By default a ThreadPoolExecutor is created with 8 workers. If you would like to adjust that value or share an executor across multiple sessions, you can provide one to the FuturesSession constructor.

.. code-block:: python

from concurrent.futures import ThreadPoolExecutor
from requests_futures.sessions import FuturesSession

session = FuturesSession(executor=ThreadPoolExecutor(max_workers=10))
# ...

As a shortcut, if you only need to increase the number of workers, you can pass max_workers directly to the FuturesSession constructor:

.. code-block:: python

from requests_futures.sessions import FuturesSession
session = FuturesSession(max_workers=10)

FuturesSession will use an existing session object if supplied:

.. code-block:: python

from requests import session
from requests_futures.sessions import FuturesSession
my_session = session()
future_session = FuturesSession(session=my_session)

That's it. The API of requests.Session is preserved without any modifications beyond returning a Future rather than a Response. As with all futures, exceptions are shifted (raised) at the future.result() call, so try/except blocks should be moved there.
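Because exceptions surface at future.result(), error handling moves there as well. This is plain concurrent.futures behavior, sketched here with a stand-in callable rather than a real request:

```python
from concurrent.futures import ThreadPoolExecutor

def failing_request():
    # stand-in for a request that raises, e.g. a ConnectionError
    raise ValueError('simulated request failure')

with ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(failing_request)
    # submitting does not raise; the exception is stored on the future
    try:
        future.result()
    except ValueError as exc:
        print('caught at result():', exc)
```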

Tying extra information to the request/response

The most common piece of information needed is the URL of the request. This can be accessed without any extra steps using the request property of the response object.

.. code-block:: python

from concurrent.futures import as_completed
from pprint import pprint
from requests_futures.sessions import FuturesSession

session = FuturesSession()

futures = [session.get(f'http://httpbin.org/get?{i}') for i in range(3)]

for future in as_completed(futures):
    resp = future.result()
    pprint({
        'url': resp.request.url,
        'content': resp.json(),
    })

There are situations in which you may want to tie additional information to a request/response. There are a number of ways to go about this; the simplest is to attach additional information to the future object itself.

.. code-block:: python

from concurrent.futures import as_completed
from pprint import pprint
from requests_futures.sessions import FuturesSession

session = FuturesSession()

futures = []
for i in range(3):
    future = session.get('http://httpbin.org/get')
    future.i = i
    futures.append(future)

for future in as_completed(futures):
    resp = future.result()
    pprint({
        'i': future.i,
        'content': resp.json(),
    })

Canceling queued requests (a.k.a. cleaning up after yourself)

If you know that you won't be needing any additional responses from futures that haven't yet resolved, it's a good idea to cancel those requests. You can do this by using the session as a context manager:

.. code-block:: python

from requests_futures.sessions import FuturesSession
with FuturesSession(max_workers=1) as session:
    future = session.get('https://httpbin.org/get')
    future2 = session.get('https://httpbin.org/delay/10')
    future3 = session.get('https://httpbin.org/delay/10')
    response = future.result()

In this example, whichever of the second and third requests is still queued when the session exits will be cancelled, saving time and resources that would otherwise be wasted.
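This cleanup relies on standard Future.cancel() semantics: a future that is still queued behind a busy worker can be cancelled, while one that is already running cannot. The mechanics, sketched with a single-worker executor and stand-in tasks instead of real requests:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_task(name):
    time.sleep(0.5)
    return name

executor = ThreadPoolExecutor(max_workers=1)
running = executor.submit(slow_task, 'first')   # picked up immediately
queued = executor.submit(slow_task, 'second')   # waits behind the first

# the second task has not started yet, so it can still be cancelled
print(queued.cancel())   # True: the task will never run
print(running.result())  # 'first'
executor.shutdown()
```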

Iterating over a list of requests responses

Without preserving the order of the requests:

.. code-block:: python

from concurrent.futures import as_completed
from requests_futures.sessions import FuturesSession
with FuturesSession() as session:
    futures = [session.get('https://httpbin.org/delay/{}'.format(i % 3)) for i in range(10)]
    for future in as_completed(futures):
        resp = future.result()
        print(resp.json()['url'])
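When the order of responses does matter, iterate the futures list directly instead of using as_completed; result() on each future simply blocks until that particular request finishes. The difference is ordinary concurrent.futures behavior, sketched with stand-in tasks:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

def task(i):
    # later submissions finish sooner
    time.sleep(0.3 - 0.1 * i)
    return i

with ThreadPoolExecutor(max_workers=3) as executor:
    futures = [executor.submit(task, i) for i in range(3)]
    # completion order: fastest first
    by_completion = [f.result() for f in as_completed(futures)]
    # submission order: preserved regardless of timing
    by_submission = [f.result() for f in futures]

print(by_completion)  # [2, 1, 0]
print(by_submission)  # [0, 1, 2]
```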

Working in the Background

Additional processing can be done in the background using requests's hooks_ functionality. This can be useful for shifting work out of the foreground; for a simple example, take JSON parsing.

.. code-block:: python

from pprint import pprint
from requests_futures.sessions import FuturesSession

session = FuturesSession()

def response_hook(resp, *args, **kwargs):
    # parse the json storing the result on the response object
    resp.data = resp.json()

future = session.get('http://httpbin.org/get', hooks={
    'response': response_hook,
})
# do some other stuff, send some more requests while this one works
response = future.result()
print('response status {0}'.format(response.status_code))
# data will have been attached to the response object in the background
pprint(response.data)

Hooks can also be applied to the session.

.. code-block:: python

from pprint import pprint
from requests_futures.sessions import FuturesSession

def response_hook(resp, *args, **kwargs):
    # parse the json storing the result on the response object
    resp.data = resp.json()

session = FuturesSession()
session.hooks['response'] = response_hook

future = session.get('http://httpbin.org/get')
# do some other stuff, send some more requests while this one works
response = future.result()
print('response status {0}'.format(response.status_code))
# data will have been attached to the response object in the background
pprint(response.data)

A more advanced example that adds an elapsed property to all requests.

.. code-block:: python

from pprint import pprint
from requests_futures.sessions import FuturesSession
from time import time


class ElapsedFuturesSession(FuturesSession):

    def request(self, method, url, hooks=None, *args, **kwargs):
        start = time()
        if hooks is None:
            hooks = {}

        def timing(r, *args, **kwargs):
            r.elapsed = time() - start

        try:
            if isinstance(hooks['response'], (list, tuple)):
                # needs to be first so we don't time other hooks execution
                hooks['response'].insert(0, timing)
            else:
                hooks['response'] = [timing, hooks['response']]
        except KeyError:
            hooks['response'] = timing

        return super(ElapsedFuturesSession, self) \
            .request(method, url, hooks=hooks, *args, **kwargs)



session = ElapsedFuturesSession()
future = session.get('http://httpbin.org/get')
# do some other stuff, send some more requests while this one works
response = future.result()
print('response status {0}'.format(response.status_code))
print('response elapsed {0}'.format(response.elapsed))

Using ProcessPoolExecutor

Similarly to ThreadPoolExecutor, it is possible to use an instance of ProcessPoolExecutor. As the name suggests, the requests will be executed concurrently in separate processes rather than threads.

.. code-block:: python

from concurrent.futures import ProcessPoolExecutor
from requests_futures.sessions import FuturesSession

session = FuturesSession(executor=ProcessPoolExecutor(max_workers=10))
# ... use as before

.. HINT:: Using the ProcessPoolExecutor is useful in cases where memory usage per request is very high (large responses) and cycling the interpreter is required to release memory back to the OS.

A base requirement of using ProcessPoolExecutor is that Session.request and FuturesSession both be pickle-able.

This means that only Python 3.5+ is fully supported, while Python 3.4 REQUIRES an existing requests.Session instance to be passed when initializing FuturesSession. Python 2.x and versions below 3.4 are currently not supported.

.. code-block:: python

# Using python 3.4
from concurrent.futures import ProcessPoolExecutor
from requests import Session
from requests_futures.sessions import FuturesSession

session = FuturesSession(executor=ProcessPoolExecutor(max_workers=10),
                         session=Session())
# ... use as before

In case pickling fails, an exception is raised pointing to this documentation.

.. code-block:: python

# Using python 2.7
from concurrent.futures import ProcessPoolExecutor
from requests import Session
from requests_futures.sessions import FuturesSession

session = FuturesSession(executor=ProcessPoolExecutor(max_workers=10),
                         session=Session())
Traceback (most recent call last):
...
RuntimeError: Cannot pickle function. Refer to documentation: https://github.com/ross/requests-futures/#using-processpoolexecutor

.. IMPORTANT::

  • Python >= 3.4 required
  • A session instance is required when using Python < 3.5
  • If sub-classing FuturesSession it must be importable (module global)

Installation

pip install requests-futures

.. _requests: https://github.com/kennethreitz/requests
.. _concurrent.futures: http://docs.python.org/dev/library/concurrent.futures.html
.. _backport: https://pypi.python.org/pypi/futures
.. _hooks: http://docs.python-requests.org/en/master/user/advanced/#event-hooks