The LinkFromPool concurrency fix (commit 3608c13 / PR #1449) introduced a per-path mutex map to prevent race conditions during concurrent publish operations:
var (
	fileLockMutex sync.Mutex
	fileLocks     = make(map[string]*sync.Mutex)
)

func getFileLock(filePath string) *sync.Mutex {
	fileLockMutex.Lock()
	defer fileLockMutex.Unlock()
	if mutex, exists := fileLocks[filePath]; exists {
		return mutex
	}
	mutex := &sync.Mutex{}
	fileLocks[filePath] = mutex
	return mutex
}
The problem is that fileLocks is append-only: every unique destinationPath ever passed to LinkFromPool allocates a *sync.Mutex entry that is never removed. This is a slow but unbounded memory leak. At an estimated 150-200 bytes per unique path (map bucket, string key, and mutex allocation), registering 100,000 packages would account for roughly 20MB of extra memory.
That's small enough that it may be worth accepting as a tradeoff, since preventing it adds real complexity: either adding a refcount to the fileLocks entries so each can be deleted once its last holder releases it, or storing them in a bounded least-recently-used (LRU) cache that evicts the oldest entries once it reaches capacity.