Saturday, March 16, 2019
Avoid AI #Recruiters - @wadeandwendy from @randstad is no Alexa
Randstad is a third-party recruiting firm that companies use when they can't, or aren't interested in, recruiting directly. Their Twitter header boldly proclaims "human forward," though I'm not sure what that means.
Randstad themselves have outsourced (fourth-party?) the communications of recruiting to Wade and Wendy, which bills itself as the first AI Recruiter. In reality, it's not.
First, their emails have MailChimp code showing. I've tweeted this to them, but they continue to send me emails with it, so maybe there's no I there, A or otherwise.
Second, the emails offer only a "Yes" option - there are FAQs and even an email address, but the inbox doesn't appear to be monitored. You'd think an Artificial Intelligence company would have something in place to handle incoming messages. I wrote to them to say that the role wasn't a fit for me, that the location was all wrong, and, even more importantly, that the company they were recruiting for wasn't one I'd ever consider working for.
I never heard back.
Until I got a reminder from Wendy.
So, I looked more closely at the email. (This is the second company they emailed me about - I would work for Philips - I've stared at their monitors in hospitals on so many occasions that I have a strong appreciation for them. My first portable CD player back in 1990 - took 10 AA batteries - was also a Philips.)
There was no real way to select "No" in the email, so I clicked Yes. I was prompted to register with my LinkedIn, Google, or Microsoft account, or to create a username and password. So I signed in/registered, and then I met Wendy.
Wendy is a glorified chatbot. Maybe not even glorified. Maybe not even a chatbot. More of a progressive form. Here's my chat:
Red Circle = this was my only choice.
Blue Circle = I had yes/no options.
Green Circle = I had multiple choices, perhaps as many as 9, but it captured the first one and then moved on. You'd think an AI would want to collect as much data as it could, allowing me to select multiple options, such as Company, Role, Location, I'm Happy Where I Am, and so on. If I could only pick one, I would have picked Company.
However, I clicked on Role first, so that's what it selected.
Never once did it prompt me for any free-entry typing.
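To make the complaint concrete: capturing every selection instead of just the first is a tiny change in most UI code. Here's a hypothetical sketch (the option names come from the screenshot; the `captureReasons` function is invented for illustration and is not Wade and Wendy's actual code):

```typescript
// Hypothetical multi-select capture. Wendy keeps only the first click;
// a Set keeps every distinct reason, in the order the user clicked them.
type Reason = "Company" | "Role" | "Location" | "I'm Happy Where I Am";

function captureReasons(clicks: Reason[]): Reason[] {
  // Deduplicate repeated clicks but preserve click order.
  return [...new Set(clicks)];
}
```

With something like that, a candidate who clicks both Role and Company hands the recruiter twice the data for the same effort.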
The survey link at the bottom isn't linked - probably to reduce the number of submissions telling them their product is a steaming pile of dog droppings.
So, with no "No" option in the email, I've clicked Yes. I'm now asked to sign in again. The Artificial Intelligence Recruiter has forgotten who I am.
So I went through the "chat" - the first thing I noticed is that the content and prompts are 100% the same. The Artificial Intelligence Recruiter has forgotten it's already had this conversation with me, so it re-introduces itself and tells me the exact same thing it told me last time. (Which wasn't interesting the first time and wasn't new this time.)
While I wanted to click "Location" or "Other," for the sake of this experiment I went with "Role" - we'll see if Wendy is capable of learning or if it will keep sending me Project Manager roles. (Unless this post gets me suppressed from their mailing list... yes, please.)
From now on, if I get any more of these, I'm going to click "No" with the reason "Other" - and if it lets me type, I'm going to tell them I'm not interested because of Wendy.
All the more interesting, then, that Wendy is again writing to me today about the same role at another company. The Artificial Intelligence Recruiter wasn't paying attention when I told it that it was the wrong kind of role.
Wendy leaves a bad impression, and it suggests that if recruiting is boiled down to a crappy form-based interface, it's only going to recruit people willing to suffer the crappy form-based interface. (CyberCoders' field-merge approach or LinkedIn's InMail is much better, because if you reply, you're immediately talking to a person.)
ar·ti·fi·cial
adjective
1. made or produced by human beings rather than occurring naturally, especially as a copy of something natural; not existing naturally; contrived or false.
synonyms: synthetic, fake, false, imitation, mock, simulated, faux, ersatz, substitute
2. insincere or affected.
synonyms: feigned, insincere, false, affected, mannered, unnatural, stilted, contrived, pretended, put-on, exaggerated, actorly, overdone, overripe, forced, labored, strained, hollow, spurious
3. conventional as opposed to natural.
(from Google)
Saturday, March 09, 2019
User Experience: Does your "flow" make sense?
I recently read an article on jobs to avoid. In one case, it said "avoid job x as it's projected to be outsourced more and more and instead try job y." I thought to myself... how odd... job x and job y are the same thing... aren't they?
So I searched "x vs y" on Google. The top result was in a box and based on the snippet, looked to be exactly what I was looking for.
I read the article, a user-generated submission, and thought it was spot on: it confirmed my theory that x and y are the same thing and that the other article was written by someone who doesn't know what they're talking about.
At the end of the article was an opportunity to provide feedback:
So I clicked "I found this answer useful."
Immediately, I was taken to a new page that simply had this (plus their standard header and footer):
I immediately hit the back button because it was not what I was expecting. Had I clicked the wrong thing? Nope, trying again resulted in the same screen.
What have they done wrong?
They have failed to set expectations. In soliciting feedback, there's no indication that an account is required (or that one will need to be created to provide feedback). This could easily be solved with the addition of a few tweaks:
1. Replace the signup page with a popup/overlay modal so you never leave the page you're on.
2. Indicate on the signup prompt that an account is required to leave feedback.
3. After "Did you find this answer useful?" simply add "(Create account or sign in to leave feedback)" if the user is unauthenticated.
Even if all they had done is step 3, they'd probably get more signups.
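Tweak 3 really is just a few lines of code. A hypothetical sketch (the `Session` shape and `feedbackPrompt` function are invented for illustration; this isn't the site's actual code):

```typescript
// Hypothetical fix for tweak 3: label the feedback prompt for visitors
// who aren't signed in, so the signup page never comes as a surprise.
interface Session {
  authenticated: boolean;
}

function feedbackPrompt(session: Session): string {
  const question = "Did you find this answer useful?";
  // Signed-in users see the plain question; everyone else is told up
  // front that clicking will require an account.
  return session.authenticated
    ? question
    : `${question} (Create account or sign in to leave feedback)`;
}
```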
I bet if they were to look at their page analytics, they'd see users come in from Google, read an article, click a feedback button, see the signup screen, go back, click the feedback button again, see the signup screen again, and then end the session.
The lack of clarity is costing them the chance to create a relationship with a brand new user who's never heard of them before but would have been acquired simply by the nature of the user-generated content.