Multiple vulnerabilities in eLinkSmart padlocks

"Open, Sesame!" - Unlocking Bluetooth LE padlocks with polite requests.


Locks serve as our mundane companions, guarding against our shared fear of intrusion. In some places, the humble lock is gradually being replaced by its smart counterpart - one that's embedded with electronics, wielding the power of keyless entry. However, with this great power comes great responsibility - one of ensuring that new functionality doesn't come at the cost of security. This blog post is a deep-dive into the security implications of a series of smart locks popular in the UK and Germany.

The focus of this work was the eLinkSmart range of Bluetooth-enabled locks. The brand was selected due to its prominence on the front page of Amazon, with all of the first five results for "Bluetooth padlock" being eLinkSmart products in the UK at the time of writing. Additionally, all of these products were highly reviewed, with thousands of reviews in the four-to-five-star range, and were quite affordable.

Several vulnerabilities were found across the locks' implementation of the Bluetooth Low Energy (BLE) communication and eLinkSmart's back-end API. These enable an attacker to unlock any lock within Bluetooth range, and to gather unlock history information - including times and locations - for any lock in the world, even if location tracking was not enabled by the user.

Target Locks

The initial focus of this research was the eLinkSmart YL-P10BF lock, the top Amazon result for "bluetooth padlock" at the time of testing. Priced at around £50, it advertises itself as a "large heavy lock", with 360g of heft and a solid metal body. It offers three main unlocking methods: over Bluetooth using the app, using the fingerprint sensor, and using one of the two keys it comes with.

In terms of physical security, it has a single ball-bearing locking mechanism. The lock consists of a slider lock with ten moving elements and an internal motor. The shackle is 0.7cm of unhardened stainless steel, which would likely not last long against mid-range bolt cutters, though this is par for the course for consumer-grade padlocks. Overall, a clear attempt has been made to defend against the most common physical attacks.

A mobile application, named "eSmartLock", is required to interface with the lock. An account must first be registered with eLinkSmart; a lock in its factory state can then be bound to that account. From there, the account can add or remove fingerprints, unlock with the app, view unlock records (both fingerprint and app unlocks), modify settings on the lock, and authorise other users to unlock using the app. Importantly, the lock settings include location tracking: when enabled, the application records the phone's current location, which is then included in the unlock records.

Over time, more locks from the same manufacturer were tested. The exact model numbers assessed were as follows:

  • YL-P10BF
  • YL-P8BF
  • YL-P5BF
Figure 1: eLinkSmart locks tested. From left to right: YL-P10BF, YL-P8BF, YL-P5BF.

Objectives and Methodology

The objective of this project was to find common vulnerabilities in a selection of highly rated BLE smart locks from Amazon, a leading online storefront. An additional objective was to try to create a consistent methodology that could be applied to similar products.

The methodology used to assess the locks was as follows:

  • Intercept any wireless communications made between the application and the lock.
  • Decompile the application to reverse engineer the protocol.
  • Use these to understand the full authentication chain, and try to find any potential weak points.
  • Develop an attack and evaluate the impact.

Intercepting Wireless Communications

Tools used:

  • nRF52840 Dongle
  • nRF Sniffer for Bluetooth LE
  • Wireshark
  • Burp Suite

The first link in the chain was the BLE communication. To analyse it, a Nordic Semiconductor dongle running their nRF Sniffer software, designed to intercept BLE, was used alongside Wireshark, the common packet analysis tool that the sniffing software was written to interface with. Once the monitoring interface was set up, a number of actions were performed and the packets recorded. Below is an example exchange between the lock and phone:

Phone -> Smartlock: 1000cbf505019d56631b5b12b078a63b188b
Smartlock -> Phone: 10004f6cb20b470cb1c0d6d2550b62015937
Phone -> Smartlock: 2000a315ba1d0a31c1f3e75402909cc6442a2918
Phone -> Smartlock: 40c9a49e86ba276f9cc6343d116f
Smartlock -> Phone: 1000247e030ac9fce1ea0b0870d7dec4fe10
Phone -> Smartlock: 30002540b5073440622296c1f9102c385a7fcb8c
Phone -> Smartlock: 6c462f1db670c0dd2494f0fe4fd583e75d867dde
Phone -> Smartlock: ffc76c19148e30f86e74

Long messages were split across multiple packets, with the first two bytes of each message being its length. The messages themselves had two traits in common that strongly indicated encryption was being used:

  • They were all seemingly random.
  • The lengths were all exact multiples of sixteen, implying a block cipher with 16-byte blocks.
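Based on these observations, the reassembly logic can be sketched in a few lines of Python. The little-endian byte order here is an assumption, albeit one consistent with the Packet.shortToByteArray_Little helpers that turned up in the decompiled code:

```python
import struct

def reassemble(packets):
    """Reassemble a length-prefixed message from one or more BLE packets.

    Assumes the first two bytes are a little-endian length that counts
    the bytes following the prefix; continuation packets carry no header.
    """
    stream = b"".join(packets)
    (length,) = struct.unpack_from("<H", stream, 0)
    body = stream[2:2 + length]
    if len(body) < length:
        raise ValueError(f"message truncated: expected {length} bytes, got {len(body)}")
    return body
```

Applied to the captured exchange above, the packet starting `2000...` declares a 32-byte body (0x0020), which is exactly what the 18 bytes remaining in that packet plus the 14-byte follow-up packet provide.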

The next step to better understand the system was to intercept the HTTP traffic from the application with Burp Suite. However, the process of capturing traffic was non-trivial. After setting up the Burp Suite proxy and adding its certificate to the mobile device, only the requests were successfully captured, and the server would always respond with the error 400 No required SSL certificate was sent. This implied that mutual TLS (mTLS) was at play - the API expected a TLS certificate to be provided by the application. With that in mind, we pivoted to the app itself.

Reversing the Android Application

Tools used:

  • jadx-gui
  • Frida
  • A rooted Google Pixel 2

Between the mTLS and encrypted BLE communications it was clear that the application itself would need to be decompiled and understood before these communications could be properly assessed.

To understand what was happening under the hood the APK was first pulled from the device, then decompiled. However, the code was obfuscated, with most classes, methods, and variables renamed to single characters. Shown below is an example of the obfuscated code:

public static byte[] T(int i2, String str) {
   byte[] bArr = new byte[18];
   System.arraycopy(Packet.shortToByteArray_Little((short) 16),0,bArr,0,2);
   System.arraycopy(Packet.shortToByteArray_Little((short) 18),0,bArr,2,2);
   System.arraycopy(Packet.intToByteArray_Little(i2), 0,bArr,4,4);
   System.arraycopy(Packet.intToByteArray_Little((int) (c.g.a.a.s.h.x()/1000)),0,bArr,8,4);
   byte[] bytes = str.getBytes();
   System.arraycopy(bytes, 0, bArr, 12, bytes.length);
   c.n.a.i g2 = c.n.a.f.g("BleProtocolUtils");
   g2.j("--packageUnlockCloudPwd-- bUlkCloudPwd:" + c.g.a.a.s.a.c(bArr, ","));
   return p(bArr);
}

Also shown in this snippet, however, is the vital clue to undoing the obfuscation: log statements (seen in the g2.j() call above). While the production build of the application suppressed most logging, causing nothing to show up in the Android system logs, the code was littered with verbose log statements using a custom logging library, with a common structure: 'className -- methodName -- message'.

This greatly simplified the process of deciphering and renaming variable names. For example, the code snippet above could be simplified to the following:

public static byte[] packageUnlockCloudPwd(int token, String password) {
  byte[] packet = new byte[18];
  System.arraycopy(Packet.shortToByteArray_Little((short) 16),0,packet,0,2);
  System.arraycopy(Packet.shortToByteArray_Little((short) 18),0,packet,2,2);
  System.arraycopy(Packet.intToByteArray_Little(token), 0, packet, 4, 4);
  System.arraycopy(Packet.intToByteArray_Little((int) (DateUtil.getTimeInMillis() / 1000)), 0, packet, 8, 4);
  byte[] bytes = password.getBytes();
  System.arraycopy(bytes, 0, packet, 12, bytes.length);
  Logger classLogger = CustomLogger.classLogger("BleProtocolUtils");
  classLogger.log("--packageUnlockCloudPwd-- bUlkCloudPwd:" + ByteArrayUtils.asCSV(packet, ","));
  return encryptData(packet);
}

To capture even more information, and to make sure nothing was missed, the logger calls were hooked using Frida so they could be monitored. There were a handful of different methods used to print log statements (presumably different log levels); however, they all eventually called the same method down the line. The Frida script used to capture information from the custom logger was the following:

Java.perform(function() {
  let CustomLogger = Java.use("c.n.a.g");
  CustomLogger["m"].implementation = function (logLevel, className, message, error) {
    console.log(`${className} -- ${message} -- ${error}`);
    this["m"](logLevel, className, message, error);
  };
});

An example output from the logs captured in this way can be seen below:

null -- LockController----- openBleNotify: BleDevice{mDevice=A4:C1:38:06:19:2C, mScanRecord=null, mRssi=0, mTimestampNanos=0, lockName='null'} -- null
null -- BleLockScanActivity --- 设备notify成功 ===>A4:C1:38:06:19:2C -- null
null -- BleProtocolUtils--packageLogin-是否设置时间->false -- null
BleProtocolUtils -- --packageLogin login:06,00,01,00,D3,F1,50,65 -- null
BleProtocolUtils -- --packageEncryptMessage encryptData:10,00,1F,B5,0B,67,82,F1,8F,B5,E7,29,B4,6A,4D,ED,25,1A -- null
null -- LockController--write--encryptLogin final data:10,00,1F,B5,0B,67,82,F1,8F,B5,E7,29,B4,6A,4D,ED,25,1A -- null
ModifyMac -- LockController--onCharacteristicChanged-data->40,00,68,3F,C6,33,87,B1,8C,1E,0A,3C,A3,3F,58,29,44,3E,74,2E -- null
null -- LockController--onCharacteristicChanged-cDataLen->2 -- null
null -- LockController--onCharacteristicChanged-bDataLen->2 -- null
null -- LockController--onCharacteristicChanged-->64 -- null
null -- LockController--onBleCharacteristicChanged--total:64 -- null
ModifyMac -- LockController--onCharacteristicChanged-data->F2,6B,5D,DF,B5,7A,08,E7,3C,A3,2A,E5,2A,F7,2F,77,EA,07,7E,E4 -- null

Eagle-eyed readers may have noticed references in the log output and the deobfuscated block to encryption. From here, a class named BleAESCrypt was identified - this handled the encryption for the BLE traffic. It turned out that all BLE data was encrypted using AES in ECB mode, with a 16-byte block size and a hardcoded key. This means that anyone who downloads the eSmartLock application could, with some reverse-engineering knowledge, decrypt all communications between the lock and mobile phone. Additionally, since the key was hardcoded both in the app and in the firmware, any attempt to mitigate this issue would require updating the firmware on every lock sold - a significant effort.
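The weakness of this construction is easy to demonstrate. The sketch below uses the third-party `cryptography` package and an illustrative key (the real hardcoded key is deliberately not reproduced); it shows the defining flaw of ECB mode: identical plaintext blocks always encrypt to identical ciphertext blocks, so protocol patterns leak even to an observer without the key - and here, the key itself ships in every copy of the app.

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Illustrative key only -- stands in for the hardcoded key found in the app.
HARDCODED_KEY = bytes(range(16))

def ecb_encrypt(plaintext: bytes) -> bytes:
    # ECB encrypts each 16-byte block independently with the same key,
    # so repeated plaintext blocks produce repeated ciphertext blocks.
    encryptor = Cipher(algorithms.AES(HARDCODED_KEY), modes.ECB()).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

def ecb_decrypt(ciphertext: bytes) -> bytes:
    # Anyone holding the extracted key can decrypt every exchange.
    decryptor = Cipher(algorithms.AES(HARDCODED_KEY), modes.ECB()).decryptor()
    return decryptor.update(ciphertext) + decryptor.finalize()
```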

From here, decrypting the packets was trivial, and thanks to an equally helpfully labelled class, BleProtocolUtils, so was reverse-engineering the protocol.

Finding the certificate required to work around mTLS was also short work. The certificate was being pulled from the APK and pushed into the Android Keystore, password-protected with a hardcoded password. The certificate was pulled and loaded into Burp Suite.

Following the deobfuscation work it became possible to:

  • Inspect and understand the BLE traffic.
  • Intercept and manipulate the API traffic.
  • Receive verbose logging to refer back to for further questions.

The two main roadblocks, BLE encryption and mTLS, were removed. The path to further understanding and, hopefully, exploiting the locks became clear.

The Locks' Protocol

The protocol used by the locks on top of BLE was fairly straightforward. Each message comprised a two-byte header containing the total message length, a command code, and additional parameters. The command code was a number between 1 and 43 that specified which operation the lock should perform and which parameters to expect. Below is a dissected packet, captured the moment before the lock unlocked after the Unlock button was pressed in the application:

Figure 2: Dissection of an eLinkSmart unlock packet.

To unlock the lock, the application would take the following steps:

  • Connect to the lock over Bluetooth.
  • Negotiate a random 4-byte session token using the "login" command code.
  • Send an unlock packet containing the session token and the lock's password.

The session token was likely implemented to protect against basic replay attacks: with it in place, it would be impossible to simply record and replay an encrypted request - further understanding of the encryption in use and the protocol used by the locks would be required.

A similar procedure to the above would be followed to factory reset the lock (thus removing all fingerprints and mobile phone bindings) or initiate the process of introducing new fingerprints to the padlock.
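The unlock payload assembled by the deobfuscated packageUnlockCloudPwd routine shown earlier can be sketched in Python as follows. The meanings of the two leading shorts (16 and 18) are assumptions read off the decompiled code, and in the real protocol this payload would still be AES-encrypted before transmission:

```python
import struct
import time

def package_unlock_cloud_pwd(token: int, password: str) -> bytes:
    """Build the 18-byte unlock payload: two little-endian shorts
    (16 and 18, presumably a command code and the total length), the
    4-byte session token, a Unix timestamp, and the six-digit password.
    """
    if len(password) != 6 or not password.isdigit():
        raise ValueError("lock passwords are always six decimal digits")
    header = struct.pack("<HHII", 16, 18, token & 0xFFFFFFFF, int(time.time()))
    return header + password.encode("ascii")
```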

Looking at the password, it is important to note the following:

  • The password was always exactly six decimal digits.
  • The locks tested only ever used one password (referred to as the "admin password" in the source code). This was true even when unlocking as multiple users, using the app's remote authorisation system.
  • It was not possible to set or change this password from within the application.
  • The password also persisted through factory resets, implying it was hardcoded.


Because of the joint impact of the hardcoded encryption key and the hardcoded lock password, intercepting the communications proved to be a reliable and powerful attack, permanently giving a potential attacker the keys to the kingdom and allowing them to perform the following actions on any lock within Bluetooth range:

  • Unlock the lock at any time.
  • Add any new fingerprint for persistent access.
  • Perform a full factory reset of the lock.

This presents a unique attack surface when compared to traditional locks. If a user suspected that their combination padlock's code was compromised, they could simply change it. However, the hardcoded password of eLinkSmart locks provides no such recourse - there is no mitigation if that password is intercepted. If a user suspected that their lock had been compromised, they would have no intuitive way of protecting themselves.

The lock did, however, implement brute-force protection: after ten incorrect password attempts, it would reject all further attempts at logging in for around 90 seconds.

The Web API

The next question was "Where does the password come from?". It was never set by the user, and even after a factory reset of the lock the password seemed to stay the same. Thanks (again!) to the verbose logging within the application, it was clear the password was coming from the API, so focus was brought back there.

A snippet from the log when retrieving the lock's password from the API can be seen below:

headers: Content-Type: application/x-www-form-urlencoded;charset=utf-8
body:mac=A4%3AC1%3A38%3AXX%3AXX%3AXX&user_name=[Test Username]&loginToken=[16 byte hex token]&type=2&cp=el
收到响应 200OK 1023ms
响应headers: Server: nginx
Date: Sat, 28 Oct 2023 11:37:25 GMT
Content-Type: text/html; charset=UTF-8
Vary: Accept-Encoding
X-Powered-By: PHP/7.2.24

{
  "state": "success",
  "type": 0,
  "desc": "接口操作成功",
  "data": {
    "name": "lock",
    "mac": "A4:C1:38:XX:XX:XX",
    "isBind": 1,
    "password": "******",
    "reset": 0,
    "lock_status": 0,
    "admin_password": "",
    "apply_mode": 0
  }
}

It is worth noting that this snippet has been modified to make it more presentable - the actual output was rather noisy. As such, it was very valuable to move to another tool, Burp Suite: now that the mTLS had been worked around, the full HTTP traffic was visible there.

The application made extensive use of the API, trusting it with the bulk of the data storage for the lock, so of course this was where the password was coming from. The process was as follows:

  • The application logged in by sending the username and password to ?m=user&a=login, receiving a session token.
  • The list of locks owned by the user was retrieved by sending the user ID to ?m=Socket&a=lockList.
  • The specific lock information was retrieved by sending each lock's MAC address to ?m=lock&a=getLockInfoByMac, including the password.

The next step was to try to retrieve the password for a lock that was not owned by a user. To do this, a second user account was created, and using Burp Suite a request was sent as this new user requesting the lock's information. Ideally the endpoint would return either no information at all, or a very limited subset of the lock's information appropriate for an unauthorised user.

Figure 3: An example request to obtain a lock's password as an unauthorised user.

This was, unfortunately, not what was found. Instead the API happily disclosed all information to the new user. Any authorisation checks performed by the application were only enforced client-side.


This simple test illustrated a lack of authorisation controls - any user could access the information of any lock, regardless of ownership. This, combined with previous findings, meant that anyone could unlock any lock, without ever needing to intercept communications of a legitimate user.

Another significant endpoint was ?m=lock&a=getLockOpenLog, which returned the full list of app-based unlocks for a given lock, taking a MAC address as input. This was significant because if a lock had location tracking turned on (Did we mention the app featured location tracking for mobile unlocks? Yes, it did...), then that information would have been available to anyone. This created the potential for an even more sophisticated attack: an attacker could potentially iterate over all locks, find the ones closest to them, and unlock them.

There were quite a few other API endpoints. Were they just as vulnerable? The short answer is: yes. The only endpoint that appeared to correctly implement authentication and authorisation was ?m=voice&a=getVoiceNew, which provided a voice clip saying "Please press your fingerprint on the scanner" - possibly the endpoint with the lowest security risk.

It's also worth noting that since almost all data was available through the API, any sensitive information stored in user groups, user names, bound email addresses, lock names (some of which were interesting), or anywhere else would also be exposed.


With all of the information above, a simple proof of concept could be constructed - a script which detected all nearby eLinkSmart locks (identifying them by MAC address - the first three octets of a MAC address typically identify the manufacturer), requested their unlock passwords from the API (as a user who did not own any locks), and finally assembled and broadcast a valid unlock request for each lock in range.
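The lock-detection step of such a script might look like the following minimal sketch. The A4:C1:38 prefix matches the addresses seen in the captured logs earlier; a real tool would presumably carry a fuller list of prefixes:

```python
# OUI prefix observed on the tested locks; illustrative, not exhaustive.
CANDIDATE_OUIS = {"A4:C1:38"}

def candidate_locks(scanned_macs):
    """Filter a BLE scan down to addresses whose first three octets
    (the manufacturer-assigned OUI) match the tested locks."""
    return [mac for mac in scanned_macs if mac.upper()[:8] in CANDIDATE_OUIS]
```

Each surviving address would then be fed to the lock-information endpoint to retrieve its password.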

This approach can be seen in action below:

It Gets Worse

Knowing that the Web API suffered from significant vulnerabilities, further testing was performed to try and identify any other issues that may affect it. This revealed a number of additional significant issues, including an SQL injection vulnerability which could be used to obtain sensitive information, including users' password hashes (unsalted MD5).


SQL injection has long been one of the most common vulnerabilities encountered in Web APIs. In short, an attacker with the ability to construct the right statements would be able to extract (and potentially modify) any data stored in the backend's database. This includes users' account details, details of all locks, and more. Any password used in the eSmartLock application should be considered compromised.


The combination of issues identified throughout this project paints a bleak picture. Although parts of the system show clear attempts at implementing security measures, these are overshadowed by easy-to-exploit vulnerabilities in key areas of the locks and their supporting application. This is not atypical for cheap smart locks, and similar issues have been observed with other brands before. Things are further complicated by the fact that manufacturers rarely respond to the security community on these issues.

Remediation Recommendations

For Consumers

The one simple recommendation for consumers here would be to avoid eLinkSmart padlocks (and, likely, other cheap Chinese smart locks) until the situation improves. This is not likely to happen anytime soon, unless appropriate standards and regulations are put in place and enforced on IoT devices.

All locks tested during this research could be placed in a fingerprint-only mode - in this state, the lock is not paired with the application, and does not accept most Bluetooth requests (although it does accept some; at the time of writing, WithSecure did not identify a way to access any sensitive functionality via BLE when the lock was in this mode). This significantly reduces the device's attack surface, and allows for continued use of the lock via fingerprints.

Alternatively, the YL-P10BF could be modified to remove its "smart" functionality. Removing the battery and USB port should be sufficient, leaving the user with an adequate key-only padlock for low-security scenarios. Its resistance to destructive attacks would be relatively low, but likely not lower than that of other cheap padlocks.

This could be accomplished relatively easily by partially disassembling the lock and disconnecting a few wires. Shown below is an example modification, with the three plugs for the battery, the motor, and the USB-C charging port removed from their sockets:

Figure 4: The YL-P10BF lock with the USB, motor, and battery disconnected.

Users should also assume that their personal details, including time-specific location data for any lock, may have been accessed by third parties. In theory, this means that an attacker could attempt to find vulnerable locks in their area and access anything they protect. If this scenario is of concern, eLinkSmart users should delete their accounts through the application and replace all locks with their boring, "dumb" counterparts.

Finally, due to the SQL injection vulnerability being present in the API, users should assume that any passwords used in the eSmartLock application have been compromised. If the same password was used in any other service, it should be changed immediately.

For the Manufacturer

The SQL injection issue must be resolved as a matter of urgency - this is most easily accomplished through the use of prepared statements. Additionally, eLinkSmart should prioritise modifying their Web API to ensure appropriate authorisation and access controls are in place. Users should only be able to access information of locks they own. The API issues are by far the biggest problem with this system. These are also some of the easiest issues to address, requiring only minor changes to the mobile application and its backend's programming.
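Although the backend appears to be written in PHP, the prepared-statement pattern is the same in any language. Below is a sketch in Python (using the stdlib sqlite3 module, with hypothetical table and column names) showing both fixes at once: parameterised queries and a server-side ownership check:

```python
import sqlite3

def get_lock_for_user(conn: sqlite3.Connection, user_id: int, mac: str):
    """Look up a lock only if it belongs to the requesting user.

    The '?' placeholders keep attacker-controlled input (the MAC) out of
    the SQL text, preventing injection, while the owner_id predicate
    enforces authorisation on the server rather than in the app.
    Table and column names are hypothetical.
    """
    cur = conn.execute(
        "SELECT mac, password FROM locks WHERE mac = ? AND owner_id = ?",
        (mac, user_id),
    )
    return cur.fetchone()
```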

Ideally, the locks should not rely on hardcoded encryption keys - these will always be retrievable through reverse-engineering of the application, rendering sniffing and replay attacks viable. Instead, encryption keys should be generated for each lock at the time of its pairing with a mobile device. These improvements would require a firmware update to be deployed to all locks, and would involve a substantial development effort. Realistically, this is something that could be seen in a new model of locks.
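As an illustration of what per-pairing keys could look like, the sketch below (using the third-party `cryptography` package; a design suggestion, not anything eLinkSmart implements) derives a fresh 16-byte AES key from an X25519 key exchange performed at pairing time, so no long-term key ever needs to ship in the app or the firmware:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_pairing_key(own_private: X25519PrivateKey, peer_public) -> bytes:
    """Derive a 16-byte AES key unique to one phone/lock pairing.

    Both sides run the same derivation on the X25519 shared secret,
    arriving at the same key without it ever crossing the air.
    """
    shared_secret = own_private.exchange(peer_public)
    return HKDF(
        algorithm=hashes.SHA256(),
        length=16,
        salt=None,
        info=b"lock-pairing-v1",  # hypothetical context label
    ).derive(shared_secret)
```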

Finally, the reliance on static, hardcoded, unchanging passcodes means that users have no way of protecting themselves if they suspect their lock has been compromised. Ideally, these should be under control of the lock's owner. Additionally, two users of the same lock should be able to use different authentication secrets.

Disclosure Timeline

  • 1st September 2023 - Initial contact - Multiple points of contact within eLinkSmart e-mailed with a high-level description of the issues and proof-of-concept code.
  • 19th September 2023 - Follow-up after no response from vendor.
  • 11th October 2023 - Follow-up after no response from vendor. Intention to publicise findings communicated.
  • 8th December 2023 - Public presentation of findings at BSides London.
  • 6th February 2024 - Blog post publication.


Closing Notes

The content of this blog post was demonstrated at BSides London 2023 - the recording can be found on YouTube.

This project originated as part of the WithSecure Cyber Security Internship in 2023. More information about the internship scheme and how to apply for the 2024 round can be found in our job posting. (Deadline: 15th March 2024)