# Facial Recognition Essay

Facial recognition has become one of the most widely used technologies in both the civilian and government/military worlds. Facebook, airports, banks, and many other businesses incorporate this technology for added security, new features, and convenience. Although Bledsoe, Chan, and Bisson began work on facial recognition in the 1960s, most of the major advances came in the 1990s. Most algorithms fall into one of two categories, template-based or geometric-based (Marques). Although this technology is rapidly expanding, it still holds many more possibilities, such as 3-D facial recognition. The following paper will briefly highlight and discuss the main ideas of the most popular algorithms, as well as examine…
Facial recognition and detection algorithms work together to improve the accuracy of today's facial recognition software. The job of today's computer scientists and mathematicians is to mimic the human eye's and brain's ability to detect and recognize human faces by replicating this complicated process with a series of highly sophisticated algorithms. Infants learn these skills shortly after birth, and today's programmers are only beginning to scratch the surface of the possibilities created with this…
P_I = Î / I*, where Î equals the number of correctly identified probes and I* represents the size of the probe set.
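The identification rate above can be sketched in a few lines of Python. This is a minimal illustration, assuming each probe in the probe set has already been scored as correctly identified or not; the function name is illustrative, not from the source.

```python
# Minimal sketch of the identification rate P_I = Î / I*.
# `results` holds one boolean per probe: True if the probe was
# correctly identified, False otherwise.
def identification_rate(results):
    i_hat = sum(results)     # Î: number of correctly identified probes
    i_star = len(results)    # I*: size of the probe set
    return i_hat / i_star

# e.g. 8 of 10 probes matched correctly -> P_I = 0.8
```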
The final comparisons are made using all pairs (P_I, P_J) and all of the ROC values, which were measured by letting ROC = R_k/P. As with the algorithms already discussed, the Linear Discriminant Analysis (LDA) model begins with Gaussian data. LDA makes use of two values, the mean and the variance, which are computed for each class. Classes can be thought of as facial features such as chins, noses, eyes, ears, hairlines, hairstyles, etc. Four pieces of data must be calculated before the final algorithm can be used. The mean value is used to find muk:
muk = (1/nk) * sum(x). After this, the variance is found across all classes using this mean value:
sigma^2 = (1/(n - K)) * sum((x - muk)^2). There are two additional steps required before making a final prediction using the LDA method; this paper will only look at the final function:
Dk(x) = x * (muk / sigma^2) - muk^2 / (2 * sigma^2) + ln(PIk), where "Dk(x) is the discriminant function for class k given input x; the muk, sigma^2 and PIk are all estimated from your data" (Brownlee). Once this discriminant is calculated, the data can be further manipulated to assist in facial recognition.
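The three quantities above (class mean, pooled variance, and the discriminant function) can be sketched directly in Python. This is a one-dimensional illustration of the formulas as stated, with illustrative function names; a real system would work on extracted facial-feature vectors.

```python
import math

# Sketch of the LDA pieces described above, for scalar features.

def class_mean(xs):
    # muk = (1/nk) * sum(x) over the samples of class k
    return sum(xs) / len(xs)

def pooled_variance(classes):
    # sigma^2 = 1/(n - K) * sum over all classes of (x - muk)^2
    n = sum(len(xs) for xs in classes)
    k = len(classes)
    ss = sum((x - class_mean(xs)) ** 2 for xs in classes for x in xs)
    return ss / (n - k)

def discriminant(x, muk, sigma2, prior):
    # Dk(x) = x * muk/sigma^2 - muk^2/(2*sigma^2) + ln(PIk)
    return x * muk / sigma2 - muk ** 2 / (2 * sigma2) + math.log(prior)
```

To classify an input x, compute Dk(x) for every class k and predict the class with the largest discriminant value.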

## Related Documents


Suppose the system contains n_od nodes that will be divided into a number of Rendezvous Geographic Zones (RG_Z) based on their geographic area. Each zone will hold a number of nodes n_od ranging from ⌈max_i⌉ to ⌊min_i⌋, depending on its area load. This achieves O(log(AVG[⌈max_i⌉, ⌊min_i⌋])) running time. These zones are categorized as follows, and so on. Then, instead of hashing all the nodes in the system, we hash each node within a specific zone using the Spooky hash function illustrated in the next subsection. Based on the hashes obtained, the nodes are organized into levels (levels 1 to 3, based on the number of nodes), and the node with the highest weighted hash in the upper level is assigned as coordinator of that level, C(j)_{m,x}^n, where n is the zone number, m is the level the coordinator is in, and x is the coordinator ID.…
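The coordinator-election step described above can be sketched as follows. This is an illustrative assumption-laden sketch: SHA-256 stands in for the SpookyHash function (which is not in the Python standard library), and node IDs are simple strings.

```python
import hashlib

# Sketch of the election step above: hash each node in a zone and
# pick the node with the highest hash value as that level's coordinator.
# (SHA-256 is used here as a stand-in for SpookyHash.)

def node_hash(node_id: str) -> int:
    digest = hashlib.sha256(node_id.encode()).digest()
    return int.from_bytes(digest[:8], "big")

def elect_coordinator(zone_nodes):
    """Return the node ID with the highest hash in the zone."""
    return max(zone_nodes, key=node_hash)
```

Because the election depends only on the hashes, every node in the zone computes the same coordinator independently, with no extra coordination messages.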


(1 point) Correct answer: Delimited
9) A function that removes extra blank spaces from a string of characters. (1 point) Correct answer: TRIM
10) An Excel feature that predicts how to alter data based upon the pattern you enter into the cell at the beginning of the column. (1 point) Correct answer: Flash Fill
11) The values that an Excel function uses to perform calculations or operations. (1 point) Correct answer: Arguments
12) A database function that adds a column of values in a database that is limited by criteria set for one or more cells. (1 point) Correct answer: DSUM
13) The use of two or more conditions on the same row, all of which must be met for the records to be included in the results.…


10.3 Types of Testing
10.3.1 White Box Testing: A level of white box test coverage is specified that is appropriate for the software being tested. The white box and other testing uses automated tools to instrument the software and measure test coverage.
10.3.2 Black Box Testing: A black box test of integration builds includes functional, interface, error recovery, stress, and out-of-bounds input testing. All black box software tests are traced to control requirements. In addition to static requirements, a black box test of the fully integrated system against scenario sequences of events is designed to model field operation.…
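The out-of-bounds input testing mentioned above can be sketched with a tiny black-box-style test. The function under test here, `clamp_percent`, is a hypothetical stand-in (not from the source); the point is that the test exercises only inputs and outputs, never internals.

```python
# Black-box-style test sketch: the function is treated as opaque and is
# exercised with nominal and out-of-bounds inputs.

def clamp_percent(value):
    """Hypothetical unit under test: clamp a number into 0-100."""
    return max(0, min(100, value))

def test_clamp_percent():
    assert clamp_percent(50) == 50     # functional (nominal input)
    assert clamp_percent(-10) == 0     # out-of-bounds low
    assert clamp_percent(250) == 100   # out-of-bounds high
```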


Then, the results are sorted according to the weight of each pair. Consider the general PROW algorithm while ignoring the pre- and post-PROW steps. The complexity of the while statement in line 2 of the PROW algorithm can be calculated as O(n), since it iterates until a specified value is met. Then, in the for loop, the maximum number of iterations occurs when no more remaining pairs are found, n. Since the second for loop has the same maximum, the Big-O notation for the nested for loops is O(n²). Thus, the lower bound and final result of the Big-O notation is…
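The counting argument above can be illustrated with a small sketch (this is not the PROW algorithm itself, just the shape of its nested loops): two loops that each run up to n times perform on the order of n² pair comparisons.

```python
# Illustration of the nested-loop bound: each loop runs up to n times,
# so the body executes n * n times, i.e. O(n^2) comparisons.

def count_pair_comparisons(n):
    comparisons = 0
    for i in range(n):        # outer loop: up to n iterations
        for j in range(n):    # inner loop: up to n iterations
            comparisons += 1  # one pairwise comparison
    return comparisons

# count_pair_comparisons(10) -> 100, i.e. n^2
```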


The study is aimed at the comparison of homologous protein BLOCKs using different diversity parameters (MDRs, DHPs, MCRs, etc.) that are formulated using positional frequencies of observed hetero-pairs and homo-pairs in BLOCKs. APBEST, written in the AWK programming language, extracts these BLOCK-specific parameters. How efficient is the program? Do these parameters correlate with existing literature reports? To resolve these questions, we first ran the program on constructed BLOCK FASTA files.…
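The positional pair-counting idea above can be sketched briefly. This is an illustrative Python sketch, not APBEST itself: at one alignment column, every pair of sequences contributes either a homo-pair (same residue) or a hetero-pair (different residues).

```python
from itertools import combinations

# Sketch of positional pair counting: given the residues at one
# alignment column across all sequences in a BLOCK, count homo-pairs
# (identical residues) and hetero-pairs (differing residues).

def pair_counts(column):
    homo = hetero = 0
    for a, b in combinations(column, 2):
        if a == b:
            homo += 1
        else:
            hetero += 1
    return homo, hetero

# e.g. column "AAG": pairs (A,A), (A,G), (A,G) -> 1 homo, 2 hetero
```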


Therefore p = f(AA) + ½f(Aa). Likewise, counting all the recessive alleles, q = f(aa) + ½f(Aa). With the equation p² + 2pq + q² = 1, we can say that the sum of the genotype frequencies should equal 1, or 100%. If the observed traits give us a phenotype frequency, we can use the equation p + q = 1: if we obtain either allele frequency, p or q, we can subtract it from 1 to find the other.…
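The relations above can be checked with a short worked example. The starting figure (16% recessive phenotype) is an assumed illustration, not data from the source.

```python
# Worked Hardy-Weinberg example using the relations above.
# Assume 16% of a population shows the recessive phenotype (aa),
# so q^2 = 0.16.

q2 = 0.16
q = q2 ** 0.5          # q = 0.4
p = 1 - q              # from p + q = 1  ->  p = 0.6

AA = p * p             # p^2  = 0.36
Aa = 2 * p * q         # 2pq  = 0.48
aa = q * q             # q^2  = 0.16

# The genotype frequencies sum to 1: p^2 + 2pq + q^2 = 1
assert abs(AA + Aa + aa - 1) < 1e-9
```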


Technology impacts innumerable aspects of Western civilization, including its advancement in the world. Its historical significance has developed at a steady rate and continues to evolve. Among these influential technologies is the computer, with its organizational efficiency. Computers have come a long way from their origins to everything this tool is used for today. The original purpose of a computer was simply to crunch numbers or crack the Enigma code; now it is one of the most common devices in use, serving a plethora of purposes: social, professional, financial, and educational.…


The fitness was assessed by the k-fold cross-validation (CV) technique in this study. In k-fold cross-validation, the training data set is randomly split into K mutually exclusive subsets (folds) of approximately equal size. With a given set of parameters, the regression function is built using (K-1) subsets as the training set. The efficacy of the parameter set is measured by the mean absolute percent error (MAPE) on the remaining subset (the testing set). The above procedure is repeated K times, so that each subset is used once for testing.…
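The procedure above can be sketched as follows. This is a simplified illustration: the folds are taken sequentially rather than by random shuffling, and `fit`/`predict` are hypothetical stand-ins for whatever regression model is being tuned.

```python
# Sketch of K-fold cross-validation scored by MAPE, as described above.

def mape(actual, predicted):
    """Mean absolute percent error, as a percentage."""
    n = len(actual)
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n * 100

def k_fold_mape(xs, ys, k, fit, predict):
    n = len(xs)
    fold = n // k
    scores = []
    for i in range(k):
        test_idx = set(range(i * fold, (i + 1) * fold))
        train = [(x, y) for j, (x, y) in enumerate(zip(xs, ys)) if j not in test_idx]
        test = [(x, y) for j, (x, y) in enumerate(zip(xs, ys)) if j in test_idx]
        model = fit(train)                        # build on the K-1 training folds
        preds = [predict(model, x) for x, _ in test]
        scores.append(mape([y for _, y in test], preds))
    return sum(scores) / k                        # average error over K repeats
```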


Several methods exist for dealing with serial correlation. Here, we will deal exclusively with batch means, replication/deletion, and the Mean Squared Error Reduction (MSER) technique. The goal of these methods is to produce valid confidence intervals (CIs) in the presence of serial correlation. In our analysis, we will use the lag-k autocorrelation to find a point at which the observations are…
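The lag-k autocorrelation mentioned above can be sketched directly from its standard sample formula (the function name is illustrative):

```python
# Sample lag-k autocorrelation:
# r_k = sum_{t=0}^{n-k-1} (x_t - xbar)(x_{t+k} - xbar) / sum_t (x_t - xbar)^2

def lag_k_autocorrelation(xs, k):
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[t] - mean) * (xs[t + k] - mean) for t in range(n - k))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

# r_0 is always 1; large positive values at small lags indicate the
# serial correlation these output-analysis methods must account for.
```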


Six different classification algorithms were used: naïve Bayes, support vector machines, decision trees, logistic regression, nearest neighbors, and a rule learner. We ran an evaluation with a 60% training and 40% test split and report f1-scores. We computed two baselines for the classification tasks: the majority baseline (maj) and the averaged majority baseline (avg maj). The majority baseline is close to 0.5 because we balanced the two classes for each task. The averaged majority baseline is the average of the performance obtained by classifying all the instances first in one class and then in the other.…
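The majority baseline described above can be sketched in a few lines (accuracy is used here for simplicity in place of the f1-scores the snippet reports):

```python
from collections import Counter

# Majority baseline sketch: always predict the most frequent class
# from the training labels, then score that prediction on the test set.

def majority_baseline(train_labels, test_labels):
    majority = Counter(train_labels).most_common(1)[0][0]
    correct = sum(1 for y in test_labels if y == majority)
    return correct / len(test_labels)

# With balanced classes, this baseline lands close to 0.5.
```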
