Achieving fairness in algorithmic decision-making tools is an issue of growing importance: today, unfair decisions made by such tools can even carry legal consequences. We propose a new constraint that integrates fairness into data envelopment analysis (DEA), allowing the relative efficiency scores of decision-making units (DMUs) to be calculated with fairness taken into account. The proposed fairness constraint prevents disparate impact from arising in efficiency scores and enables the construction of a single data envelopment analysis for privileged and discriminated groups of DMUs simultaneously. We show that the proposed method (FairDEA) produces an interpretable model, which we test on a synthetic dataset and on a real-world example, namely, the ranking of hybrid and conventional car designs. We interpret the FairDEA method by comparing it to basic DEA and to the balanced fairness and efficiency method (BFE DEA). Along with calculating the disparate impact of the method, we perform a Wilcoxon rank-sum test to check for fairness in rankings. The results show that the FairDEA method achieves efficiency scores similar to those of the other methods, but without disparate impact. Statistical analysis indicates that the differences in ranking between groups are not statistically significant, meaning that the ranking is also fair. This method contributes both to the development of data envelopment analysis and to the inclusion of fairness in efficiency analysis.
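
The two fairness checks mentioned above can be illustrated with a minimal sketch (this is not the paper's code, and the efficiency scores, group sizes, and 0.8 efficiency cut-off below are all hypothetical): disparate impact is the ratio of favorable-outcome rates between the unprivileged and privileged groups, and the Wilcoxon rank-sum test compares the two groups' score distributions.

```python
import math

def disparate_impact(sel_unpriv, n_unpriv, sel_priv, n_priv):
    """Ratio of favorable-outcome rates: unprivileged over privileged.
    A value near 1 indicates no disparate impact; the common four-fifths
    rule flags values below 0.8."""
    return (sel_unpriv / n_unpriv) / (sel_priv / n_priv)

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test using the normal approximation
    (no tie correction; adequate when all scores are distinct)."""
    combined = sorted(x + y)
    ranks = {v: i + 1 for i, v in enumerate(combined)}  # assumes no ties
    w = sum(ranks[v] for v in x)                        # rank sum of group x
    n1, n2 = len(x), len(y)
    mean = n1 * (n1 + n2 + 1) / 2
    var = n1 * n2 * (n1 + n2 + 1) / 12
    z = (w - mean) / math.sqrt(var)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical efficiency scores for privileged and unprivileged DMU groups.
priv   = [0.92, 0.88, 0.75, 0.81, 0.67]
unpriv = [0.90, 0.70, 0.85, 0.78, 0.66]

# Count DMUs classed as "efficient"; the 0.8 cut-off is an assumption.
sel_p = sum(s >= 0.8 for s in priv)
sel_u = sum(s >= 0.8 for s in unpriv)
di = disparate_impact(sel_u, len(unpriv), sel_p, len(priv))

z, pvalue = rank_sum_test(priv, unpriv)
print(f"disparate impact = {di:.3f}, rank-sum p-value = {pvalue:.3f}")
```

A p-value above the chosen significance level would indicate that the rankings of the two groups do not differ statistically, which is the fairness-in-ranking criterion the abstract refers to.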