I am working on a face recognition script. I have managed to create a dataset by capturing images from a webcam, saving them to a local directory, and storing the data in my local database. But when I try to run the main app to recognize the faces and display them to me, I get the following error:
runfile('C:/Users/JeanCamargo/Google Drive/python/college/face recognition/face recognition.py', wdir='C:/Users/JeanCamargo/Google Drive/python/college/face recognition')
Reloaded modules: dbconnect
Traceback (most recent call last):
File "C:\Users\JeanCamargo\Google Drive\python\college\face recognition\face recognition.py", line 27, in <module>
recognizer.read(r"trainner\trainningData.yml")
error: OpenCV(4.4.0) C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-6lylwdcz\opencv\modules\core\src\persistence.cpp:2089: error: (-215:Assertion failed) isMap() in function 'cv::FileNode::operator []'
Any ideas on what's causing this? The file I am running is as follows:
import cv2
import sys
import numpy as np
import pickle
from PIL import Image
from dbconnect import mySQL
import os
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read(r"trainner\trainningData.yml")
cascPath = r"Classifiers\haarcascade_frontalface_alt.XML"
faceCascade = cv2.CascadeClassifier(cascPath)
#Id = 0
path = 'dataSet'
def getProfile(Id):
    # Assumes mySQL from dbconnect is a DB-API style connection object
    cursor = mySQL.cursor()
    cursor.execute("SELECT * FROM people WHERE ID = %s", (Id,))
    profile = None
    for row in cursor.fetchall():
        profile = row
    cursor.close()
    return profile
video_capture = cv2.VideoCapture(1)
font = cv2.FONT_HERSHEY_SIMPLEX  # cv2.cv.InitFont no longer exists in OpenCV 3.x/4.x
profiles={}
while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    if not ret:
        continue
    frame = cv2.flip(frame, 1)  # Flip image
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.1,
        minNeighbors=5,
        minSize=(30, 30),
        flags=cv2.CASCADE_SCALE_IMAGE
    )
    for (x, y, w, h) in faces:
        Id, conf = recognizer.predict(gray[y:y+h, x:x+w])
        cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2)
        profile = getProfile(Id)  # was getProfile(id), which passed the builtin id function
        if profile is not None:
            # cv2.cv.PutText/fromarray are OpenCV 2.x APIs; use cv2.putText in 3.x/4.x
            cv2.putText(frame, str(profile[1]), (x, y+h+30), font, 1, (255, 255, 255), 2)
            cv2.putText(frame, str(profile[2]), (x, y+h+60), font, 1, (255, 255, 255), 2)
            cv2.putText(frame, str(profile[3]), (x, y+h+90), font, 1, (255, 255, 255), 2)
            cv2.putText(frame, str(profile[4]), (x, y+h+120), font, 1, (255, 255, 255), 2)
    # Display the resulting frame
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()
I eventually found an answer to this. The problem comes down to this call:
cv2.face.LBPHFaceRecognizer_create()
This is the correct invocation for OpenCV 3.x/4.x, but most likely you do not have the face submodule, because your cv2.pyd was built without opencv_contrib.
There are a couple of options (a quicker alternative is sketched after this list):
1. Rebuild OpenCV from source with opencv_contrib; you need a C++ compiler and CMake for this.
2. Fall back to OpenCV 2.4 and use cv2.createLBPHFaceRecognizer() instead.
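If rebuilding from source is not practical, installing the prebuilt opencv-contrib-python wheel (instead of plain opencv-python) is another way to get the face submodule. A minimal check, sketched on the assumption that a standard pip install is in use:

import cv2

print(cv2.__version__)

# cv2.face only exists when the build includes opencv_contrib
if hasattr(cv2, "face"):
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    print("face module available")
else:
    # Typical fix (assumption: pip-based install):
    #   pip uninstall opencv-python
    #   pip install opencv-contrib-python
    print("face module missing; cv2 was built without opencv_contrib")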
Once this is done, train the data again and it will work.
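For completeness, "train the data again" just means regenerating trainner\trainningData.yml with the contrib-enabled build before the recognition script reads it. A rough sketch of the training step, assuming the dataSet images are named User.<Id>.<n>.jpg (adjust the label parsing to whatever naming your capture script actually uses):

import os
import numpy as np
import cv2
from PIL import Image

recognizer = cv2.face.LBPHFaceRecognizer_create()

def get_images_and_labels(path):
    faces, ids = [], []
    for file_name in os.listdir(path):
        image_path = os.path.join(path, file_name)
        # Assumed file naming: User.<Id>.<sampleNumber>.jpg
        face_id = int(file_name.split('.')[1])
        gray = np.array(Image.open(image_path).convert('L'), 'uint8')
        faces.append(gray)
        ids.append(face_id)
    return faces, ids

faces, ids = get_images_and_labels('dataSet')
recognizer.train(faces, np.array(ids))
os.makedirs('trainner', exist_ok=True)
recognizer.write(r"trainner\trainningData.yml")  # this is the file the main script reads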