Racist mortgage lenders charge 8% higher interest to ethnic minorities


Borrowers from minority groups were charged 8% higher interest rates by mortgage lenders and were rejected for loans 14% more often than those from privileged groups

Although the U.S. Equal Credit Opportunity Act exists to prevent discrimination by mortgage lenders, biases still often disadvantage borrowers from racial minority backgrounds when they buy a house, even when they have the same wealth as white buyers.

These biases are then frequently baked into the machine-learning models that lenders use to guide decision-making – with consequences for housing fairness that can contribute to widening the racial wealth gap.

For mortgage lenders, if a model is trained on an unfair dataset – like one where a higher proportion of Black borrowers were denied loans versus white borrowers with the same income or credit score – those biases will affect the model’s predictions when it is applied to real situations.

For instance, a model might learn to reject a minority borrower who would have been approved had their race been recorded as white.
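A minimal, hypothetical sketch of that effect might look as follows: a classifier trained on synthetic lending data in which Black applicants with identical finances were denied more often simply reproduces the gap at prediction time. Every column name and number here is invented for illustration and has no connection to the study’s data.

```python
# Illustrative only: a model trained on biased labels reproduces the bias.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
race = rng.choice(["white", "black"], size=n)
income = rng.normal(60_000, 15_000, size=n)
credit_score = rng.normal(680, 50, size=n)

# Historically biased labels: identical finances, but Black applicants
# are denied more often than white applicants.
base = (income - 60_000) / 15_000 + (credit_score - 680) / 50
penalty = np.where(race == "black", 1.0, 0.0)
approved = (base - penalty + rng.normal(0, 0.5, size=n)) > 0

X = pd.get_dummies(
    pd.DataFrame({"race": race, "income": income, "credit_score": credit_score}),
    columns=["race"],
)
model = LogisticRegression(max_iter=1000).fit(X, approved)

# The approval gap present in the labels reappears in the predictions.
preds = model.predict(X)
for group in ["white", "black"]:
    print(group, round(preds[race == group].mean(), 2))
```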

Aiming to tackle racism and discrimination in mortgage lending, researchers from MIT have developed a method that can remove bias from the data used to train these machine-learning models.

“There is no point in making an algorithm that can automate a process if it doesn’t work for everyone equally.”

Unfair datasets amongst buyers, despite similar credit scores

The Journal of Financial Economics study presents a new technique for mortgage lenders that can remove bias from a dataset with multiple sensitive attributes, such as race and ethnicity, as well as several “sensitive” options for each attribute, such as Black or white, and Hispanic or Latino or non-Hispanic or Latino.

Sensitive attributes and options are features which distinguish a privileged group from an underprivileged group, as seen in trends of racial disparities.

The technique, DualFair, then trains a machine-learning classifier that makes fair predictions of whether borrowers will receive a mortgage loan. When applied to mortgage lending data from several U.S. states, the method significantly reduced discrimination in its predictions while maintaining high accuracy.

Jashandeep Singh, a senior at Floyd Buchanan High School and co-lead author of the paper with his twin brother, Arashdeep, said: “As Sikh Americans, we deal with bias on a frequent basis and we think it is unacceptable to see that transform to algorithms in real-world applications.

“For things like mortgage lending and financial systems, it is very important that bias not infiltrate these systems because it can emphasize the gaps that are already in place against certain groups.”

DualFair as an evolving technology to address racism and other societal issues

DualFair tackles two types of bias in a mortgage lending dataset: label bias and selection bias.

Label bias occurs when the balance of favourable or unfavourable outcomes for a particular group is unfair – such as Black applicants, along with those from other minority groups, being denied loans more frequently than they should be.

Conversely, selection bias is created when data are not representative of the larger population – like a dataset that only includes individuals from one neighbourhood where incomes are historically low.
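As a rough illustration (not taken from the paper), the Python sketch below shows one way each bias might be surfaced in a lending table: approval-rate gaps between groups as a signal of label bias, and a mismatch between each group’s share of the sample and its assumed share of the population as a signal of selection bias. The column names and population shares are hypothetical.

```python
import pandas as pd

def label_bias_report(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Approval rate per group; large gaps between groups hint at label bias."""
    return df.groupby(group_col)[outcome_col].mean()

def selection_bias_report(df: pd.DataFrame, group_col: str,
                          population_shares: dict) -> pd.DataFrame:
    """Compare each group's share of the sample with its share of the wider
    population; large mismatches hint at selection bias."""
    sample_shares = df[group_col].value_counts(normalize=True)
    return pd.DataFrame({"sample": sample_shares,
                         "population": pd.Series(population_shares)})

# Toy usage with made-up figures
df = pd.DataFrame({"race": ["white", "white", "white", "black", "black"],
                   "approved": [1, 1, 0, 1, 0]})
print(label_bias_report(df, "race", "approved"))
print(selection_bias_report(df, "race", {"white": 0.6, "black": 0.4}))
```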

DualFair eliminates label bias by subdividing a dataset into the largest possible number of subgroups based on combinations of sensitive attributes and options, such as white men who are not Hispanic or Latino, Black women who are Hispanic or Latino, and so on.

Once the subgroups are generated, DualFair evens out the number of borrowers in each subgroup by duplicating individuals from minority groups and deleting individuals from the majority group. It then balances the proportion of loan acceptances and rejections in each subgroup to match the median in the original dataset before recombining the subgroups.
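A hedged sketch of what that balancing step could look like in pandas is shown below. It assumes a DataFrame with sensitive-attribute columns and a 0/1 approval label, and it picks the median subgroup size and the median acceptance rate as targets; those particular choices are assumptions, not the authors’ published implementation.

```python
import pandas as pd

def balance_subgroups(df: pd.DataFrame, sensitive_cols: list,
                      label_col: str, seed: int = 0) -> pd.DataFrame:
    """Even out subgroup sizes and acceptance rates, then recombine.
    Assumes every subgroup contains both approved (1) and denied (0) cases."""
    groups = [g for _, g in df.groupby(sensitive_cols)]
    target_size = int(pd.Series([len(g) for g in groups]).median())
    target_rate = float(df.groupby(sensitive_cols)[label_col].mean().median())

    balanced = []
    for g in groups:
        # Oversample small subgroups and undersample large ones to a common size.
        g = g.sample(n=target_size, replace=len(g) < target_size, random_state=seed)
        # Resample within each label so the acceptance rate matches the target.
        n_pos = round(target_rate * target_size)
        pos, neg = g[g[label_col] == 1], g[g[label_col] == 0]
        balanced.append(pd.concat([
            pos.sample(n=n_pos, replace=len(pos) < n_pos, random_state=seed),
            neg.sample(n=target_size - n_pos,
                       replace=len(neg) < target_size - n_pos, random_state=seed),
        ]))
    return pd.concat(balanced, ignore_index=True)
```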

Then, it eliminates selection bias by iterating over each data point to see whether discrimination is present. For example, if an individual is a non-Hispanic or Latino Black woman who was rejected for a loan, the system will adjust her race, ethnicity, and gender one at a time to see whether the outcome changes.

If this borrower is granted a loan when her race is changed to white, DualFair considers that data point biased and removes it from the dataset.
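One rough way to express that counterfactual check in code is sketched below. It assumes a scikit-learn-style classifier already trained on the data and a hypothetical `encode` helper that turns a raw applicant row into model features; neither comes from the paper.

```python
import pandas as pd

def drop_biased_rows(df: pd.DataFrame, model, encode, sensitive_options: dict) -> pd.DataFrame:
    """Remove rows whose predicted outcome flips when any single sensitive
    attribute (race, ethnicity, sex, ...) is swapped for another option."""
    keep = []
    for idx, row in df.iterrows():
        original = model.predict(encode(row.to_frame().T))[0]
        biased = False
        for col, options in sensitive_options.items():
            for option in options:
                if option == row[col]:
                    continue
                altered = row.copy()
                altered[col] = option  # the "alternate world" version of this applicant
                if model.predict(encode(altered.to_frame().T))[0] != original:
                    biased = True
                    break
            if biased:
                break
        if not biased:
            keep.append(idx)
    return df.loc[keep]
```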

By breaking down the dataset into as many subgroups as possible, DualFair can simultaneously address discrimination based on multiple attributes.

Measuring fairness with the average odds difference

The researchers tested DualFair using the publicly available Home Mortgage Disclosure Act dataset, which spans 88% of all mortgage loans in the U.S. in 2019 and includes 21 features, including race, sex, and ethnicity.

With the DualFair method applied, the fairness of predictions increased while accuracy remained high across all states, making it harder for racial discrimination against minority groups to creep into lenders’ decisions.

The researchers also used an existing fairness metric known as the average odds difference. However, that metric can only measure fairness in one sensitive attribute at a time – so they created their own fairness metric, called the alternate world index, which accounts for bias from multiple sensitive attributes and options as a whole.
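For a single sensitive attribute, the average odds difference has a standard definition (used, for example, in IBM’s AIF360 toolkit): the mean of the gaps in true-positive and false-positive rates between the unprivileged and privileged groups, with zero indicating equal odds. A short reference implementation of that conventional formula, independent of DualFair itself, is given below.

```python
import numpy as np

def average_odds_difference(y_true, y_pred, group, unprivileged, privileged):
    """0.5 * [(FPR_unpriv - FPR_priv) + (TPR_unpriv - TPR_priv)]; 0 means parity."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))

    def rates(mask):
        yt, yp = y_true[mask], y_pred[mask]
        tpr = (yp[yt == 1] == 1).mean()  # true-positive rate
        fpr = (yp[yt == 0] == 1).mean()  # false-positive rate
        return tpr, fpr

    tpr_u, fpr_u = rates(group == unprivileged)
    tpr_p, fpr_p = rates(group == privileged)
    return 0.5 * ((fpr_u - fpr_p) + (tpr_u - tpr_p))
```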

Using this metric, they found that DualFair increased fairness in predictions for four of the six states while maintaining high accuracy.

Gupta stated: “Researchers have mostly tried to classify biased cases as binary so far. There are multiple parameters to bias, and these multiple parameters have their own impact in different cases. They are not equally weighed. Our method is able to calibrate it much better.”

Khan concluded: “It is the common belief that if you want to be accurate, you have to give up on fairness, or if you want to be fair, you have to give up on accuracy. We show that we can make strides toward lessening that gap.”

“Technology, very bluntly, works only for a certain group of people. In the mortgage loan domain in particular, African American women have been historically discriminated against. We feel passionate about making sure that systemic racism does not extend to algorithmic models.

“There is no point in making an algorithm that can automate a process if it doesn’t work for everyone equally.”
