Faster video file FPS with cv2.VideoCapture and OpenCV
Have you ever worked with a video file via OpenCV's cv2.VideoCapture
function and found that reading frames just felt slow and sluggish?
I've been there, and I know exactly how it feels.
Your entire video processing pipeline crawls along, unable to process more than one or two frames per second, even though you aren't doing any computationally expensive image processing operations.
Why is that?
Why, at times, does it seem to take an eternity for cv2.VideoCapture
and the associated .read
method to poll another frame from your video file?
The answer is almost always video compression and frame decoding.
Depending on your video file type, the codecs you have installed, and the physical hardware of your machine, much of your video processing pipeline can actually be consumed by reading and decoding the next frame from the video file.
That's computationally wasteful, and there is a better way.
In the remainder of today's blog post, I'll demonstrate how to use threading and a queue data structure to improve your video file FPS rate by over 52%!
Looking for the source code to this post? Jump right to the Downloads section.
Faster video file FPS with cv2.VideoCapture and OpenCV
When working with video files and OpenCV you are likely using the cv2.VideoCapture
function.
First, you instantiate your cv2.VideoCapture
object by passing in the path to your input video file.
Then you start a loop, calling the .read
method of cv2.VideoCapture
to poll the next frame from the video file so you can process it in your pipeline.
The problem (and the reason why this method can feel slow and sluggish) is that you're both reading and decoding the frame in your main processing thread!
As I've mentioned in previous posts, the .read
method is a blocking operation; the main thread of your Python + OpenCV application is entirely blocked (i.e., stalled) until the frame is read from the video file, decoded, and returned to the calling function.
By moving these blocking I/O operations to a separate thread and maintaining a queue of decoded frames we can actually improve our FPS processing rate by over 52%!
This increase in frame processing rate (and therefore our overall video processing pipeline) comes from dramatically reducing latency: we don't have to wait for the .read
method to finish reading and decoding a frame; instead, there is always a pre-decoded frame ready for us to process.
To achieve this latency decrease, our goal will be to move the reading and decoding of video file frames to an entirely separate thread of the program, freeing up our main thread to handle the actual image processing.
But before we can appreciate the faster, threaded method of video frame processing, we first need to establish a benchmark/baseline with the slower, non-threaded version.
The slow, naive method to reading video frames with OpenCV
The goal of this section is to obtain a baseline on our video frame processing throughput rate using OpenCV and Python.
To start, open up a new file, name it read_frames_slow.py
, and insert the following code:
# import the necessary packages
from imutils.video import FPS
import numpy as np
import argparse
import imutils
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", required=True,
	help="path to input video file")
args = vars(ap.parse_args())

# open a pointer to the video stream and start the FPS timer
stream = cv2.VideoCapture(args["video"])
fps = FPS().start()
Lines 2-6 import our required Python packages. We'll be using my imutils library, a series of convenience functions to make image and video processing operations easier with OpenCV and Python.
If you don't already have imutils
installed, or if you are using a previous version, you can install/upgrade imutils
using the following command:
$ pip install --upgrade imutils
Lines 9-12 then parse our command line arguments. We only need a single switch for this script, --video
, which is the path to our input video file.
Line 15 opens a pointer to the --video
file using the cv2.VideoCapture
class, while Line 16 starts a timer that we can use to measure FPS, or more specifically, the throughput rate of our video processing pipeline.
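To make that measurement concrete, here is a rough sketch of how such a throughput timer can work. The SimpleFPS class below is illustrative only (it is not the actual imutils.video.FPS implementation): it counts update() calls between start() and stop() and divides by elapsed wall-clock time.

```python
import time

class SimpleFPS:
	# Illustrative throughput timer (not the real imutils.video.FPS):
	# count frames between start() and stop(), divide by elapsed time.
	def __init__(self):
		self._start = None
		self._end = None
		self._frames = 0

	def start(self):
		self._start = time.time()
		return self

	def update(self):
		# call once per processed frame
		self._frames += 1

	def stop(self):
		self._end = time.time()

	def elapsed(self):
		return self._end - self._start

	def fps(self):
		return self._frames / self.elapsed()

# simulate a pipeline that "processes" 10 frames
fps = SimpleFPS().start()
for _ in range(10):
	time.sleep(0.01)  # stand-in for per-frame work
	fps.update()
fps.stop()
print("{:.2f} FPS".format(fps.fps()))
```

Note that this measures throughput of the whole loop body, which is exactly why reducing per-frame read latency shows up directly in the reported FPS.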
With cv2.VideoCapture
instantiated, we can start reading frames from the video file and processing them one-by-one:
# loop over frames from the video file stream
while True:
	# grab the frame from the video file stream
	(grabbed, frame) = stream.read()

	# if the frame was not grabbed, then we have reached the end
	# of the stream
	if not grabbed:
		break

	# resize the frame and convert it to grayscale (while still
	# retaining 3 channels)
	frame = imutils.resize(frame, width=450)
	frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
	frame = np.dstack([frame, frame, frame])

	# display a piece of text on the frame (so we can benchmark
	# fairly against the fast method)
	cv2.putText(frame, "Slow Method", (10, 30),
		cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

	# show the frame and update the FPS counter
	cv2.imshow("Frame", frame)
	cv2.waitKey(1)
	fps.update()
On Line 19 we start looping over the frames of our video file.
A call to the .read
method on Line 21 returns a 2-tuple containing:
- grabbed: A boolean indicating if the frame was successfully read or not.
- frame: The actual video frame itself.
If grabbed
is False
then we know we have reached the end of the video file and can break from the loop (Lines 25 and 26).
Otherwise, we perform some basic image processing tasks, including:
- Resizing the frame to have a width of 450 pixels.
- Converting the frame to grayscale.
- Drawing text on the frame via the cv2.putText
method. We do this because we'll be using the cv2.putText
function to display our queue size in the fast, threaded example below and want to have a fair, comparable pipeline.
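The grayscale-but-3-channels trick deserves a quick illustration. The snippet below is a standalone sketch that uses a synthetic array in place of a real decoded frame; it shows how np.dstack replicates one channel into the (H, W, 3) shape that color-frame operations like cv2.putText expect.

```python
import numpy as np

# A synthetic 240x320 single-channel (grayscale) frame stands in for
# the output of cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).
gray = np.random.randint(0, 256, (240, 320), dtype=np.uint8)

# np.dstack replicates the single channel three times along a new
# depth axis, giving a 3-channel image that still looks grayscale but
# has the shape color-frame drawing/display functions expect
frame = np.dstack([gray, gray, gray])

print(gray.shape, frame.shape)   # (240, 320) (240, 320, 3)
```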
Lines 40-42 display the frame to our screen and update our FPS counter.
The final code block handles computing the approximate FPS/frame rate throughput of our pipeline, releasing the video stream pointer, and closing any open windows:
# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# do a bit of cleanup
stream.release()
cv2.destroyAllWindows()
To execute this script, be sure to download the source code + example video to this blog post using the "Downloads" section at the bottom of the tutorial.
For this example we'll be using the first 31 seconds of the Jurassic Park trailer (the .mp4 file is included in the code download):
Let's go ahead and obtain a baseline for frame processing throughput on this example video:
$ python read_frames_slow.py --video videos/jurassic_park_intro.mp4
As you can see, processing each individual frame of the 31 second video clip takes approximately 47 seconds, with an FPS processing rate of 20.21.
These results imply that it's actually taking longer to read and decode the individual frames than the actual length of the video clip!
To see how we can speed up our frame processing throughput, take a look at the technique I describe in the next section.
Using threading to buffer frames with OpenCV
To improve the FPS processing rate of frames read from video files with OpenCV we are going to utilize threading and the queue data structure:
Since the .read
method of cv2.VideoCapture
is a blocking I/O operation, we can obtain a significant speedup simply by creating a separate thread from our main Python script that is solely responsible for reading frames from the video file and maintaining a queue.
Since Python's Queue data structure is thread safe, much of the hard work is done for us already; we just need to put all the pieces together.
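Here is a minimal, self-contained sketch of that producer/consumer pattern, with integers standing in for decoded frames (the names and sentinel scheme are illustrative, not the FileVideoStream API):

```python
from queue import Queue
from threading import Thread

# One thread fills a bounded, thread-safe queue while the main thread
# drains it; integers stand in for decoded video frames.
q = Queue(maxsize=8)
SENTINEL = None

def producer():
	for i in range(20):          # "read and decode" 20 fake frames
		q.put(i)                 # blocks if the queue is full
	q.put(SENTINEL)              # signal end-of-stream

t = Thread(target=producer, daemon=True)
t.start()

frames = []
while True:
	item = q.get()               # blocks until a frame is available
	if item is SENTINEL:
		break
	frames.append(item)

t.join()
print(len(frames))   # 20
```

Because Queue handles all the locking internally, neither side needs explicit synchronization; the bounded maxsize also keeps the producer from racing ahead and exhausting memory.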
I've already implemented the FileVideoStream class in imutils, but we're going to review the code so you can understand what's going on under the hood:
# import the necessary packages
from threading import Thread
import sys
import cv2

# import the Queue class from Python 3
if sys.version_info >= (3, 0):
	from queue import Queue

# otherwise, import the Queue class for Python 2.7
else:
	from Queue import Queue
Lines 2-4 handle importing our required Python packages. The Thread
class is used to create and start threads in the Python programming language.
We need to take special care when importing the Queue
data structure, as the name of the queue package differs based on which Python version you are using (Lines 7-12).
We can now define the constructor to FileVideoStream
:
class FileVideoStream:
	def __init__(self, path, queueSize=128):
		# initialize the file video stream along with the boolean
		# used to indicate if the thread should be stopped or not
		self.stream = cv2.VideoCapture(path)
		self.stopped = False

		# initialize the queue used to store frames read from
		# the video file
		self.Q = Queue(maxsize=queueSize)
Our constructor takes a single required argument followed by an optional one:
- path: The path to our input video file.
- queueSize: The maximum number of frames to store in the queue. This value defaults to 128 frames, but depending on (1) the frame dimensions of your video and (2) the amount of memory you can spare, you may want to raise/lower this value.
Line 18 instantiates our cv2.VideoCapture
object by passing in the video path
.
We then initialize a boolean to indicate if the threading process should be stopped (Line 19), along with our actual Queue
data structure (Line 23).
To kick off the thread, we'll next define the start
method:
	def start(self):
		# start a thread to read frames from the file video stream
		t = Thread(target=self.update, args=())
		t.daemon = True
		t.start()
		return self
This method simply starts a thread separate from the main thread. This thread will call the .update
method (which we'll define in the next code block).
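Note the return self at the end of start: it lets construction and thread startup chain into a single expression. The stripped-down Reader class below is a hypothetical illustration of that pattern, not the real FileVideoStream:

```python
import time
from threading import Thread

# Hypothetical minimal reader showing the start()-returns-self pattern.
class Reader:
	def __init__(self):
		self.frames_read = 0
		self.stopped = False

	def start(self):
		t = Thread(target=self.update, args=())
		t.daemon = True      # daemon thread won't block interpreter exit
		t.start()
		return self          # enables Reader().start() chaining

	def update(self):
		# pretend to read up to 100 frames in the background
		while not self.stopped and self.frames_read < 100:
			self.frames_read += 1

	def stop(self):
		self.stopped = True

r = Reader().start()   # construct and launch in one expression
time.sleep(0.1)        # give the background thread time to run
r.stop()
print(r.frames_read)
```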
The update
method is responsible for reading and decoding frames from the video file, along with maintaining the actual queue data structure:
	def update(self):
		# keep looping infinitely
		while True:
			# if the thread indicator variable is set, stop the
			# thread
			if self.stopped:
				return

			# otherwise, ensure the queue has room in it
			if not self.Q.full():
				# read the next frame from the file
				(grabbed, frame) = self.stream.read()

				# if the `grabbed` boolean is `False`, then we have
				# reached the end of the video file
				if not grabbed:
					self.stop()
					return

				# add the frame to the queue
				self.Q.put(frame)
On the surface, this code is very similar to our example in the slow, naive method detailed above.
The key takeaway here is that this code is actually running in a separate thread; this is where our actual FPS processing rate increase comes from.
On Line 34 we start looping over the frames in the video file.
If the stopped
indicator is set, we exit the thread (Lines 37 and 38).
If our queue is not full, we read the next frame from the video stream, check to see if we have reached the end of the video file, and update the queue (Lines 41-52).
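One subtlety worth knowing about: when the queue is full, this loop spins until a slot opens, which can burn a CPU core. A gentler variant (my own illustration, not the original imutils code) sleeps briefly on a full queue; here it is driven by a fake stream function instead of cv2.VideoCapture:

```python
import time
from queue import Queue

# Illustrative variant of the update loop: sleep briefly when the
# queue is full instead of busy-waiting.
def update_polite(stream_read, q, is_stopped, tick=0.01):
	while True:
		if is_stopped():
			return
		if q.full():
			time.sleep(tick)   # yield the CPU instead of spinning
			continue
		grabbed, frame = stream_read()
		if not grabbed:        # end of the video file
			return
		q.put(frame)

# drive it with a fake stream that yields 5 "frames" then ends
frames = iter(range(5))
def fake_read():
	try:
		return True, next(frames)
	except StopIteration:
		return False, None

q = Queue(maxsize=8)
update_polite(fake_read, q, is_stopped=lambda: False)
print(q.qsize())   # 5
```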
The read
method will handle returning the next frame in the queue:
	def read(self):
		# return next frame in the queue
		return self.Q.get()
We'll create a convenience function named more
that will return True
if there are still frames in the queue (and False
otherwise):
	def more(self):
		# return True if there are still frames in the queue
		return self.Q.qsize() > 0
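One caveat here: Queue.qsize() is only approximate under concurrency, so more() can report an empty queue while the reader thread is still decoding the next frame. A defensive alternative (illustrative, not part of imutils) polls with a timeout instead:

```python
from queue import Queue, Empty

# Illustrative defensive read: wait briefly for the next frame and
# treat a long stall as end-of-stream instead of trusting qsize().
def read_with_timeout(q, timeout=0.5):
	try:
		return q.get(timeout=timeout)   # block up to `timeout` seconds
	except Empty:
		return None                     # no frame arrived in time

q = Queue()
q.put("frame-1")
print(read_with_timeout(q))                  # frame-1
print(read_with_timeout(q, timeout=0.01))    # None (queue drained)
```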
And finally, the stop
method will be called if we want to stop the thread prematurely (i.e., before we have reached the end of the video file):
	def stop(self):
		# indicate that the thread should be stopped
		self.stopped = True
The faster, threaded method to reading video frames with OpenCV
Now that we have defined our FileVideoStream
class, we can put all the pieces together and enjoy a faster, threaded video file read with OpenCV.
Open a new file, name it read_frames_fast.py
, and insert the following code:
# import the necessary packages
from imutils.video import FileVideoStream
from imutils.video import FPS
import numpy as np
import argparse
import imutils
import time
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", required=True,
	help="path to input video file")
args = vars(ap.parse_args())

# start the file video stream thread and allow the buffer to
# start to fill
print("[INFO] starting video file thread...")
fvs = FileVideoStream(args["video"]).start()
time.sleep(1.0)

# start the FPS timer
fps = FPS().start()
Lines 2-8 import our required Python packages. Notice how we are using the FileVideoStream
class from the imutils
library to facilitate faster frame reads with OpenCV.
Lines 11-14 parse our command line arguments. Just like the previous example, we only need a single switch, --video
, the path to our input video file.
We then instantiate the FileVideoStream
object and start the frame reading thread (Line 19).
Line 23 then starts the FPS timer.
Our next section handles reading frames from the FileVideoStream
, processing them, and displaying them to our screen:
# loop over frames from the video file stream
while fvs.more():
	# grab the frame from the threaded video file stream, resize
	# it, and convert it to grayscale (while still retaining 3
	# channels)
	frame = fvs.read()
	frame = imutils.resize(frame, width=450)
	frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
	frame = np.dstack([frame, frame, frame])

	# display the size of the queue on the frame
	cv2.putText(frame, "Queue Size: {}".format(fvs.Q.qsize()),
		(10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

	# show the frame and update the FPS counter
	cv2.imshow("Frame", frame)
	cv2.waitKey(1)
	fps.update()
We start a while
loop on Line 26 that will keep grabbing frames from the FileVideoStream
queue until the queue is empty.
For each of these frames we'll apply the same image processing operations, including: resizing, conversion to grayscale, and displaying text on the frame (in this case, our text will be the number of frames in the queue).
The processed frame is displayed to our screen on Lines 40-42.
The last code block computes our FPS throughput rate and performs a bit of cleanup:
# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# do a bit of cleanup
cv2.destroyAllWindows()
fvs.stop()
To see the results of the read_frames_fast.py
script, make sure you download the source code + example video using the "Downloads" section at the bottom of this tutorial.
From there, execute the following control:
$ python read_frames_fast.py --video videos/jurassic_park_intro.mp4
As we can see from the results, we were able to process the entire 31 second video clip in 31.09 seconds, an improvement of 34% over the slow, naive method!
The actual frame throughput processing rate is much faster, clocking in at 30.75 frames per second, an improvement of 52.15%.
Threading can dramatically improve the speed of your video processing pipeline; use it whenever you can.
What about built-in webcams, USB cameras, and the Raspberry Pi? What do I do then?
This post has focused on using threading to improve the frame processing rate of video files.
If you're instead interested in speeding up the FPS of your built-in webcam, USB camera, or Raspberry Pi camera module, please refer to these blog posts:
- Increasing webcam FPS with Python and OpenCV
- Increasing Raspberry Pi FPS with Python and OpenCV
- Unifying picamera and cv2.VideoCapture into a single class with OpenCV
Summary
In today's tutorial I demonstrated how to use threading and a queue data structure to improve the FPS throughput rate of your video processing pipeline.
By placing the call to the .read
method of a cv2.VideoCapture
object in a thread separate from the main Python script, we can avoid blocking I/O operations that would otherwise dramatically slow down our pipeline.
Finally, I provided an example comparing threading with no threading. The results show that by using threading we can improve our processing pipeline by up to 52%.
However, keep in mind that the more steps (i.e., function calls) you make inside your while
loop, the more computation needs to be done; therefore, your actual frames per second rate will drop, but you'll still be processing faster than the non-threaded version.
Source: https://pyimagesearch.com/2017/02/06/faster-video-file-fps-with-cv2-videocapture-and-opencv/