Interestingly, our poster here has put the reason. I hadn’t thought about it that way, but the score is how valuable you are to creditors. Paying off early loses them some money, so the score goes down. Hilarious. What an amazing system! ☹️
The system isn’t for seeing how responsible you are, it’s for seeing how reliable you are.
They seem like similar ideas but they are quite different.
Responsibility is something capitalism can’t afford.
Fair enough!
That’s not why it went down. It probably went down because they had less credit extended to them after paying off the loan. How much of your available credit you’re using affects your score.
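A toy sketch of that mechanism (explicitly not the real FICO formula, and all numbers are made up): closing a paid-off account shrinks your total available credit, so the share you’re using goes up even though your spending hasn’t changed.

```python
# Toy illustration only -- not the real scoring formula.
def utilization(balances, limits):
    """Fraction of available revolving credit currently in use."""
    return sum(balances) / sum(limits)

# Hypothetical numbers: two accounts, $2,000 carried in total.
before = utilization(balances=[1500, 500], limits=[5000, 5000])  # 2000/10000 = 20%
# Pay off and close the second account: same card balance, half the credit.
after = utilization(balances=[1500], limits=[5000])              # 1500/5000 = 30%

print(f"utilization before: {before:.0%}, after: {after:.0%}")
# Less available credit -> higher utilization -> lower score,
# even though the borrower just did something "responsible".
```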
They don’t care that you paid it off early. They care that your loan-to-income ratio just took a hit.
That doesn’t really make sense either. Why would a high amount of debt relative to income be a good thing? How does it indicate a person is more likely or capable of paying off a loan? If anything it means the opposite.
Because it’s a racket
A high amount of debt to income is absolutely a bad thing, both in life and for your credit score
Lol, they want you to carry a higher balance. It’s not that hard.
Sounds good!
Oh goody, just thought of another amazing use for AI! You could use it to figure out the maximum length and interest a single borrower could be expected to pay, and set the terms on that! Wunderbar!
You know there are people in banks and credit institutions who have been doing this for centuries? Probably millennia… The EU explicitly requires that some of this be done by what you call AI (i.e. mathematical models), because they are fairer than humans and safer for customers and society.
Check Basel III for an intro to the topic.
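For a flavour of the kind of mathematical model involved: the textbook expected-loss quantity from the Basel credit-risk framework is EL = PD × LGD × EAD. A minimal sketch, with made-up numbers:

```python
def expected_loss(pd_: float, lgd: float, ead: float) -> float:
    """EL = PD * LGD * EAD: probability of default x loss
    given default x exposure at default."""
    return pd_ * lgd * ead

# Hypothetical loan: $10,000 exposure, 2% chance of default,
# 45% of the exposure lost if default happens.
print(expected_loss(pd_=0.02, lgd=0.45, ead=10_000))  # 90.0 dollars
```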
Just a small correction, because I worked around that area (not for loans, but for investment): it’s all algorithms rather than AI.
Algorithms are basically mathematical formulas turned into code. AI is a totally different beast that can produce quite different results on slightly different inputs. It isn’t made by turning mathematical models into code; rather, it’s trained on real-world data containing inputs and outputs, “somehow” finds the patterns in that data, and can predict the correct outputs when given fresh, never-before-seen inputs.
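A rough sketch of that distinction, with invented data: the first function is a formula someone wrote down, while the second is a model that finds the pattern from input/output pairs on its own.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# "Algorithm" in the commenter's sense: a formula turned into code.
def rule_based_approve(income: float, debt: float) -> bool:
    return debt / income < 0.4  # approve if debt-to-income under 40%

# "AI"/ML in the commenter's sense: pattern found from historical data.
rng = np.random.default_rng(0)
X = rng.uniform(0.01, 1, size=(500, 2))              # [income, debt], normalized
y = (X[:, 1] / X[:, 0] < 0.4).astype(int)            # historical outcomes
model = LogisticRegression().fit(X, y)               # "somehow" finds the pattern
print(model.predict([[0.8, 0.2]]))                   # fresh, never-seen input
```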
AI is probably used for fraud detection (and I expect nowadays it’s likely used in algorithmic trading to try to predict market movements), but unless a lot has changed since I was in the business, it’s not used for valuations.
It was just to give an idea that what OP mentioned is already an established thing, fairer than the alternatives.
Most of the time, trivial linear logistic regression is used in this context. Nowadays decision tree ensembles are also pretty heavily used, and those are ML. They simply perform better than neural networks on structured tabular data when less training data is available.
What you refer to as AI is probably methods based on deep learning. The truth is that they work exactly like any other algorithm you’re referring to. They are used for regression and classification, the same way as a standard linear regression. The difference is that the models are non-linear, and their complexity is such that a lot of data is needed to train them.
But conceptually one can absolutely create a credit score with deep neural networks. It’s just overkill, with performance likely worse than a random forest on relatively small training datasets.
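A minimal sketch of that comparison on synthetic tabular data (not a real credit dataset), using scikit-learn’s stock logistic regression and random forest:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a small structured/tabular credit dataset.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The usual baseline vs. the tree ensemble mentioned above.
for model in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
    score = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(type(model).__name__, round(score, 3))
```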
Neural network-based methods are indeed used in fraud detection.
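A minimal sketch only, on synthetic and heavily imbalanced data (fraud is rare); real fraud systems are far more involved than this:

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# ~98% legitimate transactions, ~2% "fraud".
X, y = make_classification(n_samples=2000, weights=[0.98], random_state=1)

# A small neural net as the classifier.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=1).fit(X, y)
print(f"flagged as fraud: {clf.predict(X).sum()} of {len(y)} transactions")
# Real systems tune heavily for this class imbalance.
```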
Well, I learn something every day!
Cheers!
No probs :)
Ummm… no, I wasn’t in tune with that, so it’s interesting to know I’m not the first! Haha, thanks!
We don’t need more discrimination in loan approval. A few years ago, Amazon built an AI that would look at resumes and rate how likely a candidate was to be hired. The AI trained itself to recognize female-sounding resumes (went to a women’s-only college, is involved in women’s organizations, doesn’t use manly enough language) and flag them as undesirable.
https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
Jesus Christ, that’s dystopian.
It’s not so much dystopian as it is just buggy software
Ah ok. I don’t know much about it, but I’ve heard that AI could sometimes be negative toward commonly discriminated against groups because the data that it’s trained with is. (Side note: is that true? someone pls correct me if it’s not). I jumped to the conclusion that this was the same thing. My bad
What it did was expose just how much inherent bias there is in hiring, even from name and gender alone.
That is both true and pivotal to this story
It’s a major hurdle in some uses of AI
An AI is only as good as its training data. If the data is biased, then the AI will have the same bias. The fact that going to a women’s college was considered a negative (and not simply marked down as an education of unknown quality) is evidence against an idea that many in the STEM field hold (myself included): that there is a lack of qualified female candidates, but not an active bias against them.
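A toy demonstration of that “biased data in, biased model out” loop, with synthetic data and invented feature names: the model is never told to penalize the proxy feature, but it learns a negative weight for it straight from the biased historical labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000
skill = rng.normal(size=n)                   # actual qualification
womens_college = rng.integers(0, 2, size=n)  # irrelevant to skill

# Biased historical labels: past hiring decisions penalized the proxy.
hired = (skill - 1.0 * womens_college + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, womens_college])
model = LogisticRegression().fit(X, hired)
print(model.coef_)  # second weight comes out negative: the bias is learned
```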
When buggy software is used by unreasonably powerful entities to practise (and defend) discrimination, that’s dystopian…
Except it wasn’t actually launched, and they didn’t defend its discrimination but rather ended the project.
We don’t need it, but we’re going to get it!!!