
Conversation

@pertrai1
Owner

@pertrai1 pertrai1 commented Jan 5, 2026

Summary by CodeRabbit

Release Notes

  • New Features

    • Added a new LeetCode problem solution for Top K Frequent Words, a Medium-difficulty problem.
    • Solution includes complete documentation with problem description, examples, and complexity analysis.
  • Documentation

    • Updated progress tracker reflecting the new problem addition, increasing total solved count to 169 with 75 Medium problems completed.

✏️ Tip: You can customize this high-level summary in your review settings.

@pertrai1 pertrai1 requested a review from Copilot January 5, 2026 14:03
@pertrai1 pertrai1 added the code challenge this is a coding challenge label Jan 5, 2026
@continue

continue bot commented Jan 5, 2026

All Green - Keep your PRs mergeable


All Green is an AI agent that automatically:

✅ Addresses code review comments
✅ Fixes failing CI checks
✅ Resolves merge conflicts



@github-actions

github-actions bot commented Jan 5, 2026

@codex review


@coderabbitai
Contributor

coderabbitai bot commented Jan 5, 2026

📝 Walkthrough

This PR reorganizes a previously solved LeetCode problem by moving its documentation to the standard directory structure, exporting the solution function, and updating the repository's progress metrics to reflect the new Medium problem addition.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Problem Documentation & Solution**<br>`leetcode/medium/0692-top-k-frequent-words/README.md`, `leetcode/medium/0692-top-k-frequent-words/top-k-frequent-words.ts` | Added problem documentation and exported the `topKFrequent` function from the solution file to make it part of the public API. |
| **Repository Progress Tracking**<br>`README.md` | Updated LeetCode problem counts: total solved from 168 to 169, Medium problems from 74 to 75, and added a new entry for problem 0692. |
| **Directory Cleanup**<br>`0692-top-k-frequent-words/README.md` | Removed orphaned documentation file as part of reorganizing the problem into the standard directory structure. |

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~4 minutes

Poem

🐰 A problem tucked away now finds its place,
Exported proudly in the right space,
Counts increment with organized grace,
169 solved—let's quicken the pace! ✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title "LeetCode Top K Frequent Words" directly matches the main change: adding a new LeetCode problem solution for the "Top K Frequent Words" problem (0692). It clearly identifies the primary addition to the codebase. |
| Docstring coverage | ✅ Passed | Docstring coverage is 100.00%, which meets the required threshold of 80.00%. |
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch leetcode-01052025

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@github-actions

github-actions bot commented Jan 5, 2026

Claude finished @pertrai1's task in 1m 18s


LeetCode Solution Review: Top K Frequent Words

✅ Algorithm Correctness

The solution correctly solves the problem:

  • ✅ Builds frequency map properly
  • ✅ Sorts by frequency (descending) then lexicographically (ascending)
  • ✅ Returns exactly k elements
  • ✅ Handles all test cases from the problem description

Edge cases verified:

  • Single word arrays ✅
  • All words with same frequency (relies on lexicographic order) ✅
  • k = 1 or k = total unique words ✅
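
For reference, a minimal reconstruction of the solution as this review describes it (the PR's actual file isn't reproduced in this thread, so treat this as an approximation based on the names and steps the comments mention):

```typescript
/**
 * @time - O(n log n) where n is words
 * @space - O(n) n is each item in the Map
 */
export function topKFrequent(words: string[], k: number): string[] {
  // Count how often each word appears.
  const freqMap = new Map<string, number>();
  for (const word of words) {
    freqMap.set(word, (freqMap.get(word) || 0) + 1);
  }

  // Sort by frequency (descending), then lexicographically (ascending).
  const sortedMap = Array.from(freqMap).sort(
    (a, b) => b[1] - a[1] || a[0].localeCompare(b[0])
  );

  // Take the first k words.
  const result: string[] = [];
  for (let i = 0; i < k; i++) {
    result.push(sortedMap[i][0]);
  }
  return result;
}
```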

⚠️ Complexity Analysis

Current complexity in code comments:

// @time - O(n log n) where n is words
// @space - O(n) n is each item in the Map

Actual complexity:

  • Time: O(n log n) ✅ - Correct

    • Building frequency map: O(n)
    • Converting Map to array: O(u) where u = unique words
    • Sorting: O(u log u) dominates
    • Extracting top k: O(k)
    • Since u ≤ n, overall is O(n log n)
  • Space: O(n) ✅ - Correct

    • Frequency map: O(u) where u ≤ n
    • Sorted array: O(u)
    • Result array: O(k)
    • Overall: O(n)

However: The problem's follow-up asks for O(n log k) time complexity, which this solution does NOT achieve. The current approach is the straightforward solution, not the optimal one.

🎯 DSA Pattern Recognition

Primary Pattern: Hash Map + Sorting

Why this pattern?

  1. Hash Map (Frequency Counting): When you need to count occurrences of elements, a hash map is the go-to data structure for O(1) average-case insertions and lookups.
  2. Sorting: When you need to order elements by multiple criteria (frequency + lexicographic), sorting is a clear choice.

Key Insight:
The clever part of this solution is the dual-criteria sort:

.sort((a, b) => b[1] - a[1] || a[0].localeCompare(b[0]))

  • b[1] - a[1]: sorts by frequency (descending)
  • || a[0].localeCompare(b[0]): if frequencies are equal, sorts lexicographically (ascending)

The || operator acts as a "then by" clause: the second comparison runs only when the first returns 0 (equal frequencies).
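
A small demonstration of that behavior (hypothetical data):

```typescript
// "love" and "i" both appear twice; the tie is broken alphabetically.
const entries: [string, number][] = [["love", 2], ["coding", 1], ["i", 2]];
entries.sort((a, b) => b[1] - a[1] || a[0].localeCompare(b[0]));
console.log(entries); // [["i", 2], ["love", 2], ["coding", 1]]
```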

📝 Code Quality

Strengths:

  • Clean, readable variable names (freqMap, sortedMap)
  • Good use of modern ES6+ features (Map, for...of, Array.from)
  • Proper TypeScript typing

Minor suggestions:

  1. Comment clarity: Line 11 could benefit from an inline comment explaining the dual-criteria sort
  2. Variable naming: sortedMap is actually an array of tuples, not a Map - consider sortedEntries for clarity
  3. Simplification: Lines 14-16 could be simplified using slice and map, as shown below
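
A sketch of that simplification (takeTopK is an illustrative wrapper; in the actual file the expression would simply replace the loop):

```typescript
function takeTopK(sortedMap: [string, number][], k: number): string[] {
  // Equivalent to the loop on lines 14-16: keep the first k words.
  return sortedMap.slice(0, k).map(([word]) => word);
}
```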

🔄 Alternative Approaches

Approach 1: Min-Heap (Optimal for Follow-up)

Time: O(n log k) | Space: O(n)

Use a min-heap of size k to track the top k elements:

  • Build frequency map: O(n)
  • For each unique word, add to heap (size k): O(n log k)
  • Extract results: O(k log k)

Trade-off: More complex implementation but achieves optimal O(n log k) time.
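
A minimal sketch of this heap approach, assuming a hand-rolled binary heap (the WordHeap class and topKFrequentHeap name are illustrative, not from the PR). The heap keeps its "worst" entry at the root (lowest frequency; on ties, the lexicographically larger word counts as worse), so the root can be evicted whenever the size exceeds k:

```typescript
type Entry = [word: string, freq: number];

// True when `a` should be evicted before `b`: lower frequency is worse;
// on equal frequency, the lexicographically larger word is worse.
function worse(a: Entry, b: Entry): boolean {
  return a[1] !== b[1] ? a[1] < b[1] : a[0].localeCompare(b[0]) > 0;
}

// Binary heap whose root is the "worst" entry (illustrative helper).
class WordHeap {
  private data: Entry[] = [];
  get size(): number {
    return this.data.length;
  }
  push(entry: Entry): void {
    this.data.push(entry);
    let i = this.data.length - 1;
    while (i > 0) {
      const parent = (i - 1) >> 1;
      if (!worse(this.data[i], this.data[parent])) break;
      [this.data[i], this.data[parent]] = [this.data[parent], this.data[i]];
      i = parent;
    }
  }
  pop(): Entry {
    // Assumes a non-empty heap; the callers below guarantee this.
    const top = this.data[0];
    const last = this.data.pop()!;
    if (this.data.length > 0) {
      this.data[0] = last;
      let i = 0;
      for (;;) {
        const left = 2 * i + 1;
        const right = 2 * i + 2;
        let next = i;
        if (left < this.data.length && worse(this.data[left], this.data[next])) next = left;
        if (right < this.data.length && worse(this.data[right], this.data[next])) next = right;
        if (next === i) break;
        [this.data[i], this.data[next]] = [this.data[next], this.data[i]];
        i = next;
      }
    }
    return top;
  }
}

function topKFrequentHeap(words: string[], k: number): string[] {
  const freqMap = new Map<string, number>();
  for (const word of words) freqMap.set(word, (freqMap.get(word) ?? 0) + 1);

  const heap = new WordHeap();
  for (const entry of freqMap.entries()) {
    heap.push(entry);
    if (heap.size > k) heap.pop(); // evict the current worst, keeping the k best
  }

  // Popping yields worst-to-best; reverse for descending frequency,
  // ascending lexicographic order on ties.
  const result: string[] = [];
  while (heap.size > 0) result.push(heap.pop()[0]);
  return result.reverse();
}
```

With u unique words, each push/pop on the size-k heap costs O(log k), giving O(n + u log k) overall, which is the follow-up bound.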

Approach 2: Bucket Sort + Sorting

Time: O(n + u log u) | Space: O(n)

Group words by frequency using buckets, then sort within each bucket:

  • Build frequency map: O(n)
  • Create buckets: O(u)
  • Sort within buckets: O(u log u)
  • Extract top k: O(k)

Trade-off: Better when there are many duplicate frequencies, but similar worst-case complexity.
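
A compact sketch of the bucket variant (illustrative names, not from the PR):

```typescript
function topKFrequentBuckets(words: string[], k: number): string[] {
  const freqMap = new Map<string, number>();
  for (const word of words) freqMap.set(word, (freqMap.get(word) ?? 0) + 1);

  // buckets[f] holds every word that appears exactly f times;
  // a word's frequency can be at most words.length.
  const buckets: string[][] = Array.from({ length: words.length + 1 }, () => []);
  for (const [word, freq] of freqMap) buckets[freq].push(word);

  // Walk from the highest frequency down, breaking ties alphabetically.
  const result: string[] = [];
  for (let f = words.length; f >= 1 && result.length < k; f--) {
    buckets[f].sort((a, b) => a.localeCompare(b));
    for (const word of buckets[f]) {
      if (result.length === k) break;
      result.push(word);
    }
  }
  return result;
}
```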

📚 Related Problems (Same Pattern)

  1. LeetCode 347: Top K Frequent Elements (Medium)

    • Nearly identical problem but with integers instead of strings
    • No lexicographic sorting requirement
    • Great for practicing heap-based optimization
  2. LeetCode 451: Sort Characters By Frequency (Medium)

    • Return all characters sorted by frequency
    • Similar frequency counting + sorting pattern
    • No k constraint, simpler version
  3. LeetCode 973: K Closest Points to Origin (Medium)

    • Different domain but same "top k" pattern
    • Excellent for learning heap-based optimization (QuickSelect or Max-Heap)

🎓 Learning Summary

Pattern Family

This problem belongs to the "Top K Elements" pattern family, typically solved with:

  1. Hash Map + Sorting (this solution) - Simple but O(n log n)
  2. Hash Map + Min-Heap (optimal) - O(n log k)
  3. Hash Map + QuickSelect - O(n) average case
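
Since the list above mentions QuickSelect, here is a hedged sketch of that variant as well (illustrative names; average-case O(u) to partition the u unique entries, plus O(k log k) to order the winners):

```typescript
type WordCount = [string, number];

// Negative when `a` ranks before `b`: higher frequency first,
// then lexicographically smaller word.
function rank(a: WordCount, b: WordCount): number {
  return b[1] - a[1] || a[0].localeCompare(b[0]);
}

// Lomuto-style partitioning until index `target` holds its final element.
function quickSelect(arr: WordCount[], lo: number, hi: number, target: number): void {
  while (lo < hi) {
    const pivot = arr[hi]; // simple pivot choice; randomize in practice
    let store = lo;
    for (let i = lo; i < hi; i++) {
      if (rank(arr[i], pivot) < 0) {
        [arr[i], arr[store]] = [arr[store], arr[i]];
        store++;
      }
    }
    [arr[store], arr[hi]] = [arr[hi], arr[store]];
    if (store === target) return;
    if (store < target) lo = store + 1;
    else hi = store - 1;
  }
}

function topKFrequentQuickSelect(words: string[], k: number): string[] {
  const freqMap = new Map<string, number>();
  for (const word of words) freqMap.set(word, (freqMap.get(word) ?? 0) + 1);

  // After selection, the first k entries are the top k (unordered).
  const entries: WordCount[] = Array.from(freqMap);
  quickSelect(entries, 0, entries.length - 1, k - 1);
  return entries.slice(0, k).sort(rank).map(([word]) => word);
}
```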

Key Concepts Practiced

  • ✅ Frequency counting with hash maps
  • ✅ Multi-criteria sorting with || operator
  • ✅ Tuple manipulation with Array.from(map)
  • ✅ TypeScript generics with Map<string, number>

Common Mistakes with This Pattern

  1. Forgetting lexicographic ordering - Many solutions only sort by frequency
  2. Incorrect sort order - Mixing up ascending/descending (a - b vs b - a)
  3. Off-by-one errors - Extracting k+1 or k-1 elements
  4. Not considering the follow-up - The straightforward sort works but isn't optimal

Next Steps

To master this pattern, try implementing the min-heap approach to achieve O(n log k) complexity. This is the optimal solution and commonly asked in interviews as a follow-up.


Overall Assessment: ✅ Correct, clean, and production-ready solution. While it doesn't meet the follow-up optimization challenge, it's an excellent starting point that prioritizes readability and correctness. The multi-criteria sort is elegant and showcases good understanding of JavaScript's sort comparator.

@chatgpt-codex-connector

To use Codex here, create a Codex account and connect to GitHub.


@github-actions

github-actions bot commented Jan 5, 2026

🧮 Solution Analysis

PR: LeetCode Top K Frequent Words
Files Changed: leetcode/medium/0692-top-k-frequent-words/top-k-frequent-words.ts

📊 Code Quality Results

ESLint Analysis

  • Errors: 0
  • Warnings: 0

🔍 Complexity Analysis

💡 Analysis Tips

  • Time Complexity: Focus on the dominant operation in loops
  • Space Complexity: Consider auxiliary data structures
  • Optimization: Look for opportunities to improve efficiency
  • Edge Cases: Ensure your solution handles empty inputs, null/undefined, edge conditions

📁 Analysis Files Created

Detailed analysis files have been automatically generated and committed to docs/analysis/ for future reference.

🎯 Next Steps

  1. Review any ESLint errors/warnings above
  2. Consider the complexity analysis - is this the optimal approach?
  3. Test with edge cases mentioned in the problem description
  4. Add comments explaining complex logic
  5. Check the docs/analysis/ directory for detailed complexity analysis files

Analysis generated for PR files only

@github-actions

github-actions bot commented Jan 5, 2026

✅ Test Results

Tests have been executed for this pull request.

📊 Coverage Summary

| Metric | Coverage |
| --- | --- |
| Statements | 93.54% |
| Branches | 87.8% |
| Functions | 82.35% |
| Lines | 93.64% |

Check the workflow logs for detailed test output.

@github-actions

github-actions bot commented Jan 5, 2026


🤖 Gemini Code Review (Model: gemini-2.5-flash)

This is a well-structured and clear solution to the "Top K Frequent Words" problem. Let's break it down for learning purposes.


1. Pattern Identification

This solution primarily uses the Frequency Counting pattern followed by Custom Sorting.

  1. Frequency Counting: A Map (hash map) is used to efficiently count the occurrences of each word. This is a common pattern when you need to aggregate counts or group items.
  2. Custom Sorting: After counting frequencies, the entries are converted into an array and then sorted using a custom comparison function. This function handles the two sorting criteria: primary by frequency (descending) and secondary by lexicographical order (ascending).

2. Complexity Verification

Let N be the number of words in the input array words, and L be the maximum length of a word. U is the number of unique words in words.

  • Time Complexity:

    • Stated: O(n log n)
    • Correct Complexity: O(N * L + U log U * L) which simplifies to O(N * L + N log N * L) in the worst case (where U can be up to N).
      • Why:
        • Frequency Map Creation: Iterating through N words. For each word, freqMap.get() and freqMap.set() operations on strings of length up to L take O(L) time due to string hashing and comparison. So, this step is O(N * L).
        • Converting Map to Array: Array.from(freqMap.entries()) takes O(U) time, where U is the number of unique words.
        • Sorting: The sort method on an array of U entries takes O(U log U) comparisons. Each comparison involves:
          • Checking frequencies: O(1).
          • If frequencies are equal, comparing two strings of length up to L using localeCompare(), which takes O(L) time.
          • Therefore, the sorting step is O(U log U * L).
        • Extracting k results: This loop runs k times, pushing a word (string) to the result array. This is O(K) operations.
    • Conclusion: The stated O(n log n) is generally considered accurate in competitive programming contexts where L is small (e.g., L <= 10 as per constraints) and treated as a constant. However, for a strict analysis, the L factor is important for string operations. The solution is O(N log N) if we consider string operations to be O(1), but more precisely O(N log N * L) if L is part of the input size.
  • Space Complexity:

    • Stated: O(n)
    • Correct Complexity: O(U * L) which simplifies to O(N * L) in the worst case.
      • Why:
        • freqMap: Stores up to U unique words. Each word has an average length of L. So, the map requires O(U * L) space.
        • sortedMap: A new array created from freqMap entries, also storing U words and their counts. This takes O(U * L) space.
        • result: Stores k words, taking O(K * L) space.
    • Conclusion: Similar to time complexity, the stated O(n) is often used when L is treated as a constant, referring to the number of unique words. Strictly, it's O(N * L).

3. Key Insight

The key insight is the combination of:

  1. Efficient Frequency Counting: Using a hash map (Map in JavaScript) to count word occurrences in O(N * L) time. This is much faster than repeatedly searching or sorting the original array.
  2. Custom Multi-Criteria Sorting: The problem requires a specific sorting order:
    • Primary sort: By frequency, in descending order (highest frequency first).
    • Secondary sort: For words with the same frequency, by lexicographical order, in ascending order (alphabetical).
      The custom comparison function (a, b) => { ... } elegantly handles both conditions by first comparing frequencies, and only if they are equal, comparing the words lexicographically.

4. Edge Case Analysis

The current solution correctly handles the following edge cases due to its robust approach:

  • k = 1: Returns the single most frequent word.
  • All words are unique: Frequencies are all 1. The solution will sort them purely by lexicographical order and return the first k.
  • All words are the same: E.g., words = ["apple", "apple", "apple"], k = 1. The map will have {"apple": 3}, it will be sorted (trivially), and ["apple"] will be returned.
  • words.length = 1: E.g., words = ["test"], k = 1. The map will have {"test": 1}, it will be sorted, and ["test"] will be returned.
  • Constraints: k is guaranteed to be in the range [1, The number of unique words], meaning k will always be a valid number of elements to retrieve. Word lengths are small (<= 10), and words consist of lowercase English letters, which simplifies string comparisons.

5. Learning Points

  • Similar Problems:
    • Top K Frequent Elements (LeetCode 347): Similar concept but with numbers instead of strings, simplifying comparison.
    • Sort Characters By Frequency (LeetCode 451): Another variation where characters are sorted by frequency, often requiring a custom sort or bucket sort.
    • Kth Largest Element in an Array (LeetCode 215): Focuses on finding a single element at a specific rank rather than the top K.
    • Any problem requiring you to find the "most X" or "least Y" often starts with frequency counting.
  • Common Mistakes:
    • Incorrect Sorting Logic: Forgetting to handle both primary and secondary sort criteria, or getting the ascending/descending order wrong for one of them.
    • Inefficient Frequency Counting: Using nested loops or indexOf repeatedly instead of a hash map for frequency counting, leading to O(N^2) or O(N*L*N) time complexity.
    • Off-by-one Errors with k: Incorrectly slicing or iterating to get exactly k elements.
  • Variations of this Problem:
    • Find the least frequent words.
    • Sort by word length first, then frequency, then lexicographically.
    • Handle case-insensitivity or punctuation.
    • Return words in a specific order (e.g., reversed alphabetical if frequencies are tied).

6. Code Quality

The code quality is good:

  • Variable Naming: freqMap, sortedMap, result, word, a, b are all clear and descriptive.
  • Code Structure: The logic flows naturally: count frequencies, sort, then extract.
  • Readability: The custom sort function is concise and easy to understand.
  • Modern JavaScript: Uses const for variables that aren't reassigned, Map for efficient key-value storage, and localeCompare for robust string comparison.
  • The addition of export to the function signature is a good practice for modularity and testing.
  • The // @time and // @space comments are helpful, though as discussed, could be more precise regarding L.

7. Alternative Approaches

The problem has a follow-up asking for O(N log K) time and O(N) extra space, which this O(N log N) solution doesn't meet. This hints at more advanced data structures.

  1. Min-Heap (Priority Queue):

    • Approach: First, count word frequencies using a Map (same as current solution: O(N * L) time, O(U * L) space). Then, iterate through the Map's entries. For each entry [word, frequency], add it to a min-heap (priority queue). The heap should maintain at most k elements. When adding an element, if the heap size exceeds k, remove the smallest element. The "smallest" in this context is defined by the problem's sorting criteria: lower frequency is smaller, or if frequencies are equal, a lexicographically larger word is smaller (because we want to keep the k most frequent, lexicographically smallest words).
    • Trade-offs:
      • Time Complexity: O(N * L) for frequency counting. O(U * log K * L) for heap operations (up to U insertions, each log K and O(L) for comparison). Total: O(N * L + U log K * L), which simplifies to O(N log K * L) if U is roughly N. This satisfies the O(N log K) follow-up (ignoring L).
      • Space Complexity: O(U * L) for the frequency map and O(K * L) for the heap. Total: O(U * L), satisfying the O(N) follow-up (ignoring L).
      • Complexity: Requires implementing or using a custom heap data structure, which is more complex than a simple sort.
  2. Bucket Sort (or Frequency Buckets):

    • Approach: Count word frequencies using a Map (O(N * L) time, O(U * L) space). Then, create an array of lists (buckets), where buckets[i] stores all words that appear i times. Iterate through the buckets array from the highest possible frequency (which is N) down to 1. For each bucket, sort the words within that bucket lexicographically (O(M log M * L) where M is the number of words in that bucket). Collect words until k words are found.
    • Trade-offs:
      • Time Complexity: O(N * L) for frequency counting. O(N) for placing words into buckets. The sorting within buckets can still be O(N log N * L) in the worst case (e.g., all words have the same frequency and end up in one bucket). However, if frequencies are well-distributed, it can be faster.
      • Space Complexity: O(U * L) for the frequency map and O(U * L) for the buckets. Total: O(U * L).
      • Complexity: Can be simpler to implement than a heap if you're comfortable with array manipulation and local sorting.

@github-actions

github-actions bot commented Jan 5, 2026


🤖 Gemini Code Review (Model: gemini-2.5-flash)

Here's a detailed review of the provided LeetCode solution for "Top K Frequent Words":

Review: Top K Frequent Words

1. Pattern Identification

This solution primarily uses the following algorithmic patterns:

  • Frequency Counting (Hash Map/Dictionary): To efficiently count the occurrences of each word in the input array. This is a fundamental technique for problems involving counts or unique elements.
  • Custom Sorting: After counting frequencies, the solution sorts the words based on a custom comparison logic that prioritizes frequency (descending) and then lexicographical order (ascending) for tie-breaking.
  • Top K Selection: After sorting, it simply takes the first k elements, which are guaranteed to be the top k most frequent words according to the specified criteria.

2. Complexity Verification

  • Stated Time Complexity: O(n log n) where n is words

    • Accuracy: This is generally accurate. Let's break it down more precisely:
      • Frequency Map Creation: Iterating through words (length N) and updating the freqMap. Each Map.set and Map.get operation on a string of length L takes O(L) time (due to string hashing and comparison). So, this step is O(N * L_avg), where L_avg is the average length of words. Given the constraint words[i].length <= 10, L_avg is a small constant, so this part is effectively O(N).
      • Converting Map to Array: Array.from(freqMap.entries()) takes O(M) time, where M is the number of unique words (M <= N).
      • Sorting the Array: Sorting M entries. Each comparison in the sort callback involves comparing two frequencies (constant time) and potentially two words using localeCompare (which takes O(L_avg) time). Therefore, the sorting step is O(M * L_avg * log M).
      • Extracting Top K: Iterating k times and pushing to result takes O(K) time.
    • Correct Complexity: O(N * L_avg + M * L_avg * log M). Since M <= N and L_avg is a small constant, this simplifies to O(N + N log N) or O(N log N). The stated O(n log n) (where n refers to words.length) is a reasonable approximation and commonly accepted for this approach in LeetCode contexts when string length is small.
  • Stated Space Complexity: O(n) n is each item in the Map

    • Accuracy: This is also generally accurate.
      • freqMap: Stores up to M unique words. Each word has an average length L_avg. So, the space is O(M * L_avg).
      • sortedMap: Stores the same M entries, so O(M * L_avg).
      • result: Stores k words, so O(K * L_avg).
    • Correct Complexity: O(M * L_avg). Since M <= N, this can be expressed as O(N * L_avg). The stated O(n) (where n refers to the number of unique words, and L_avg is treated as constant) is a common simplification.

3. Key Insight

The key insight for this solution is recognizing that a custom sorting function can elegantly handle the multi-criteria sorting requirement. Instead of trying to sort in multiple passes or complex data structures, one can define a single comparison logic:

  1. Primary Sort Key: Compare by frequency. Higher frequency comes first (descending order).
  2. Secondary Sort Key (Tie-breaker): If frequencies are equal, compare by lexicographical order. Alphabetically smaller words come first (ascending order).

This allows Array.prototype.sort to do the heavy lifting with a concise and readable comparator.

4. Edge Case Analysis

The solution handles the specified edge cases well:

  • Empty words array: The problem constraints state 1 <= words.length, so an empty array is not possible.
  • Single word: words = ["apple"], k = 1. freqMap will be {"apple": 1}, sortedMap [["apple", 1]], and result ["apple"]. Correct.
  • k = 1: Works correctly, picking only the single most frequent word.
  • k = number of unique words: Works correctly, returning all unique words sorted by frequency then lexicographically.
  • All words are the same: words = ["foo", "foo", "foo"], k = 1. freqMap will be {"foo": 3}, sortedMap [["foo", 3]], and result ["foo"]. Correct.
  • All words are unique: words = ["b", "a", "c"], k = 2. freqMap will be {"b": 1, "a": 1, "c": 1}. When sorted, localeCompare will ensure alphabetical order for equal frequencies, resulting in [["a", 1], ["b", 1], ["c", 1]] (or similar, depending on initial order before stable sort). The first k will be ["a", "b"]. Correct.
  • Words with same frequency, different lexicographical order: This is explicitly handled by wordA.localeCompare(wordB), which is the core of the tie-breaking logic. For example, ["i", "love", "i", "love"] with k=2 will correctly yield ["i", "love"] because i comes before love alphabetically.

5. Learning Points

  • Similar problems:
    • "Top K Frequent Elements" (LeetCode 347): This is a classic "Top K" problem, often solved using a min-heap (priority queue) to achieve O(N log K) time complexity, which is more efficient than O(N log N) when K is much smaller than N.
    • "Sort Characters By Frequency" (LeetCode 451): Another frequency-based sorting problem, but for characters within a single string.
    • Problems requiring custom comparators: Many problems benefit from custom sorting logic for objects or complex data types.
  • Common mistakes:
    • Forgetting tie-breaking conditions: Many "Top K" problems have secondary sorting criteria (like lexicographical order here). Failing to implement this correctly is a common error.
    • Incorrect sort order: Mixing up ascending/descending logic for frequencies or lexicographical order. For descending frequency, freqB - freqA is correct. For ascending lexicographical, wordA.localeCompare(wordB) is correct.
    • Inefficient frequency counting: Repeatedly iterating the array or using inefficient data structures instead of a hash map.
  • Variations:
    • O(N log K) time complexity: This is often a follow-up for "Top K" problems and typically requires a min-heap (priority queue). For this problem, the heap would store [word, frequency] pairs, ordered by frequency (min-heap) and then reverse lexicographical order for tie-breaking.
    • Least K Frequent Words: Simply reverse the frequency comparison (e.g., freqA - freqB).
    • Different tie-breaking: Sort by length, or reverse lexicographical order.

6. Code Quality

The code quality is very good:

  • Variable Naming: Variables like freqMap, sortedMap, result, word, freqA, freqB, wordA, wordB are all descriptive and enhance readability.
  • Code Structure: The solution is logically divided into three clear steps: frequency counting, sorting, and result extraction.
  • Readability: The code is easy to follow and understand. The custom sort function is well-defined.
  • TypeScript Best Practices:
    • const is used appropriately for variables that are not reassigned.
    • Type annotations (Map<string, number>, string[]) are used, improving type safety and clarity.
    • The || 0 pattern for freqMap.get(word) is a standard and clean way to handle initial map entries.
    • The export keyword is correctly added, indicating it's part of a module.

Minor Improvement Suggestion:

The final loop to extract k words could be made more functional for conciseness, though the current loop is perfectly clear and efficient.

// Current approach:
const result: string[] = [];
for (let i = 0; i < k; i++) {
  result.push(sortedMap[i][0]);
}
return result;

// Alternative (more functional):
return sortedMap.slice(0, k).map(([word]) => word);

Both are valid; the functional approach is often preferred in modern JavaScript/TypeScript for its expressiveness, but the loop is fine.

7. Alternative Approaches

  1. Min-Heap (Priority Queue) for O(N log K) Time Complexity:

    • Approach:
      1. Count word frequencies using a Map (same as current solution: O(N*L) time, O(M*L) space).
      2. Initialize a min-heap (priority queue) of size k. The heap's comparison logic would be:
        • Prioritize lower frequency (min-heap).
        • If frequencies are equal, prioritize higher lexicographical order (so that if "apple" and "apricot" both have freq 1, and "apricot" is already in the heap, "apple" (smaller) would replace it if the heap is full).
      3. Iterate through the freqMap entries. For each [word, freq] pair:
        • Add it to the heap.
        • If the heap's size exceeds k, remove the top element (which will be the word with the lowest frequency, or highest lexicographical order among equal frequencies).
      4. After processing all words, the heap will contain the k most frequent words. Extract them and sort them (if not already sorted by extraction) to meet the final output order requirement.
    • Trade-offs:
      • Time Complexity: O(N*L + M log K). This is the solution for the "Follow-up" and is more efficient than O(N log N) when K is significantly smaller than M (number of unique words).
      • Space Complexity: O(M*L + K*L).
      • Implementation Complexity: Requires implementing a custom min-heap data structure in JavaScript/TypeScript, which adds significant boilerplate code.
  2. Bucket Sort (Frequency Buckets):

    • Approach:
      1. Count word frequencies using a Map (O(N*L) time, O(M*L) space).
      2. Create an array of lists (buckets), where buckets[i] stores all words that appear i times. The maximum frequency can be N. So, buckets array size is N+1.
      3. Iterate through the freqMap. For each [word, freq] pair, add word to buckets[freq].
      4. Iterate through the buckets array from the highest frequency index (N) down to 1.
      5. For each bucket buckets[i], if it contains words, sort these words lexicographically (O(V * L log V) where V is the number of words in that bucket).
      6. Add the sorted words from buckets[i] to the result array until k words are collected.
    • Trade-offs:
      • Time Complexity: O(N*L + M + N_max_freq * L + Sum(V_i * L log V_i)). N_max_freq is the maximum frequency (can be up to N). The sorting within each bucket can dominate if many words have the same frequency. In the worst case (all words have the same frequency), it degrades to O(M * L log M).
      • Space Complexity: O(M*L + N*L) for the frequency map and the buckets array.
      • Suitability: This approach can be very efficient if the maximum frequency is relatively small, making the bucket array sparse or small. However, when frequencies can be as large as N, it can be less predictable than a straightforward sort or heap.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
leetcode/medium/0692-top-k-frequent-words/top-k-frequent-words.ts (1)

1-4: Document the algorithmic pattern used.

The complexity analysis is present, but the solution should also document the algorithmic pattern (Hash Table + Sorting) as per coding guidelines.

🔎 Proposed enhancement
 /**
+ * Pattern: Hash Table + Sorting
  * @time - O(n log n) where n is words
  * @space - O(n) n is each item in the Map
  */

Based on coding guidelines: Document the algorithmic pattern used in solution code comments.

🧹 Nitpick comments (1)
leetcode/medium/0692-top-k-frequent-words/top-k-frequent-words.ts (1)

12-12: Consider adding explicit type annotation.

While TypeScript can infer the type, explicit type annotations improve code clarity and align with repository standards.

🔎 Proposed enhancement
-  const result = [];
+  const result: string[] = [];

Based on coding guidelines: Use explicit type annotations for clarity in TypeScript solutions.

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4a0c2a5 and cbef2af.

📒 Files selected for processing (4)
  • 0692-top-k-frequent-words/README.md
  • README.md
  • leetcode/medium/0692-top-k-frequent-words/README.md
  • leetcode/medium/0692-top-k-frequent-words/top-k-frequent-words.ts
💤 Files with no reviewable changes (1)
  • 0692-top-k-frequent-words/README.md
🧰 Additional context used
📓 Path-based instructions (12)
leetcode/**/README.md

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Problem README files should include: problem title with difficulty badge, problem description, examples with inputs and outputs, and constraints

Keep README files up to date with problem counts and organization

Files:

  • leetcode/medium/0692-top-k-frequent-words/README.md
leetcode/**

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Don't require tests for all LeetCode solutions (validated on platform)

Files:

  • leetcode/medium/0692-top-k-frequent-words/README.md
  • leetcode/medium/0692-top-k-frequent-words/top-k-frequent-words.ts
**/README.md

📄 CodeRabbit inference engine (CLAUDE.md)

Include problem title and difficulty badge, link to the problem on the respective platform (LeetCode, GreatFrontEnd, etc.), problem description, examples and constraints

Files:

  • leetcode/medium/0692-top-k-frequent-words/README.md
  • README.md
**/*.ts

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

**/*.ts: Use explicit type annotations for function parameters and return types
Follow strict TypeScript configuration (defined in tsconfig.json)
Target ES2020 with CommonJS modules
Strict mode enabled with ES2020 target and CommonJS modules for TypeScript

**/*.ts: Use explicit type annotations for function parameters and return types in TypeScript solutions
Follow strict TypeScript configuration with ES2020 target and CommonJS modules
Avoid any type unless absolutely necessary in TypeScript solutions
Use interfaces for object types in TypeScript solutions

Files:

  • leetcode/medium/0692-top-k-frequent-words/top-k-frequent-words.ts
**/*.{ts,js}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

**/*.{ts,js}: Test complex algorithms with helper functions, data structure implementations, solutions with multiple edge cases, GreatFrontEnd API implementations, and reusable utility functions
Use descriptive variable names (single letters like i, j, k are acceptable for loop iterators and standard mathematical contexts)
Don't use deprecated JavaScript features

Files:

  • leetcode/medium/0692-top-k-frequent-words/top-k-frequent-words.ts
**/*-*.{ts,js,md}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use kebab-case for file names matching problem names

Files:

  • leetcode/medium/0692-top-k-frequent-words/top-k-frequent-words.ts
leetcode/**/*.{ts,js}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Don't deviate from LeetCode's expected function signatures

Files:

  • leetcode/medium/0692-top-k-frequent-words/top-k-frequent-words.ts
**/*.{js,ts}

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.{js,ts}: Verify the solution correctly solves the problem for all test cases, including edge cases: empty arrays, single elements, negative numbers, null values
Document and verify time and space complexity using Big O notation, typically in code comments or docs/analysis/ files
Use clear, descriptive variable names (not just i, j, k unless in simple loops) and add comments for non-obvious logic or algorithm steps
Use const by default, let only when reassignment is needed; avoid var except for LeetCode solution function definitions
Use strict equality (===) over loose equality (==)
Use modern ES6+ syntax (arrow functions, destructuring, spread operators) where appropriate and prefer built-in methods (.map(), .filter(), .reduce())

**/*.{js,ts}: Use const by default, let only when reassignment is needed in JavaScript/TypeScript
Use strict equality (===) over loose equality (==) in JavaScript/TypeScript
Use descriptive variable names; single letters like i, j, k acceptable only for standard loop contexts
Prefer built-in array methods (.map(), .filter(), .reduce()) in JavaScript/TypeScript
Fix ESLint errors with npm run quality:fix and ensure proper code formatting with Prettier

Files:

  • leetcode/medium/0692-top-k-frequent-words/top-k-frequent-words.ts
leetcode/**/*.{js,ts}

📄 CodeRabbit inference engine (AGENTS.md)

leetcode/**/*.{js,ts}: Keep LeetCode solutions concise and focused on the algorithm
Always document both time and space complexity in Big O notation with clear reasoning
Explain WHY the complexity is what it is, including amortized analysis when applicable
Document the algorithmic pattern used in solution code comments
Include examples in comments for complex algorithms in solution files
Review complexity analysis reports for algorithm files after running quality checks
Test edge cases if adding or modifying solution code
Do not add unnecessary dependencies or imports to solutions
Do not over-engineer simple solutions
Do not ignore time/space complexity considerations in code reviews
Do not deviate from LeetCode's expected function signatures in solution files

Files:

  • leetcode/medium/0692-top-k-frequent-words/top-k-frequent-words.ts
leetcode/**/*-*.{js,ts}

📄 CodeRabbit inference engine (AGENTS.md)

Solution files should use kebab-case naming: {problem-name}.{js|ts}

Files:

  • leetcode/medium/0692-top-k-frequent-words/top-k-frequent-words.ts
leetcode/**/*.ts

📄 CodeRabbit inference engine (AGENTS.md)

Use TypeScript type annotations for all parameters and return types with complexity analysis in code comments

Files:

  • leetcode/medium/0692-top-k-frequent-words/top-k-frequent-words.ts
README.md

📄 CodeRabbit inference engine (AGENTS.md)

Update README.md when adding new solutions

Files:

  • README.md
🧠 Learnings (12)
📓 Common learnings
Learnt from: CR
Repo: pertrai1/coding-challenges PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-30T13:41:58.608Z
Learning: Applies to leetcode/**/README.md : Keep README files up to date with problem counts and organization
Learnt from: CR
Repo: pertrai1/coding-challenges PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-30T13:41:58.608Z
Learning: Applies to leetcode/*/*/ : Ensure problems are in correct difficulty folders (easy, medium, hard) in the LeetCode directory
Learnt from: CR
Repo: pertrai1/coding-challenges PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-30T13:41:58.608Z
Learning: Applies to leetcode/**/*.js : Add JSDoc comments explaining key logic and decisions in JavaScript solution files
Learnt from: CR
Repo: pertrai1/coding-challenges PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-30T13:41:58.608Z
Learning: Applies to leetcode/**/*.js : Use JSDoc comments for function signatures in JavaScript solutions
Learnt from: CR
Repo: pertrai1/coding-challenges PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-30T13:41:58.608Z
Learning: Applies to leetcode/**/*.{js,ts} : Keep LeetCode solutions concise and focused on the algorithm
Learnt from: CR
Repo: pertrai1/coding-challenges PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-30T13:41:58.608Z
Learning: Applies to leetcode/**/[0-9][0-9][0-9][0-9]-* : Problem solution folders should follow naming convention: `{difficulty}/{4-digit-number}-{problem-name}` in kebab-case
Learnt from: CR
Repo: pertrai1/coding-challenges PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-30T13:41:58.608Z
Learning: Applies to leetcode/**/*.{js,ts} : Do not deviate from LeetCode's expected function signatures in solution files
Learnt from: CR
Repo: pertrai1/coding-challenges PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-30T13:41:58.608Z
Learning: Applies to README.md : Update README.md when adding new solutions
Learnt from: CR
Repo: pertrai1/coding-challenges PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-30T13:41:58.608Z
Learning: Applies to leetcode/**/*.{js,ts} : Document the algorithmic pattern used in solution code comments
Learnt from: CR
Repo: pertrai1/coding-challenges PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-12-30T13:41:07.646Z
Learning: Applies to leetcode/**/README.md : Problem README files should include: problem title with difficulty badge, problem description, examples with inputs and outputs, and constraints
📚 Learning: 2025-12-30T13:41:58.608Z
Learnt from: CR
Repo: pertrai1/coding-challenges PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-30T13:41:58.608Z
Learning: Applies to leetcode/**/README.md : Keep README files up to date with problem counts and organization

Applied to files:

  • leetcode/medium/0692-top-k-frequent-words/README.md
  • README.md
📚 Learning: 2025-12-30T13:41:07.646Z
Learnt from: CR
Repo: pertrai1/coding-challenges PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-12-30T13:41:07.646Z
Learning: Applies to leetcode/**/README.md : Problem README files should include: problem title with difficulty badge, problem description, examples with inputs and outputs, and constraints

Applied to files:

  • leetcode/medium/0692-top-k-frequent-words/README.md
📚 Learning: 2025-12-30T13:41:24.633Z
Learnt from: CR
Repo: pertrai1/coding-challenges PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-30T13:41:24.633Z
Learning: Applies to **/README.md : Include problem title and difficulty badge, link to the problem on the respective platform (LeetCode, GreatFrontEnd, etc.), problem description, examples and constraints

Applied to files:

  • leetcode/medium/0692-top-k-frequent-words/README.md
  • README.md
📚 Learning: 2025-12-30T13:41:58.608Z
Learnt from: CR
Repo: pertrai1/coding-challenges PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-30T13:41:58.608Z
Learning: Applies to README.md : Update README.md when adding new solutions

Applied to files:

  • leetcode/medium/0692-top-k-frequent-words/README.md
📚 Learning: 2025-12-30T13:41:58.608Z
Learnt from: CR
Repo: pertrai1/coding-challenges PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-30T13:41:58.608Z
Learning: Applies to leetcode/**/*.js : Add JSDoc comments explaining key logic and decisions in JavaScript solution files

Applied to files:

  • leetcode/medium/0692-top-k-frequent-words/README.md
📚 Learning: 2025-12-30T13:41:07.646Z
Learnt from: CR
Repo: pertrai1/coding-challenges PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-12-30T13:41:07.646Z
Learning: Applies to leetcode/**/*.{ts,js} : Don't deviate from LeetCode's expected function signatures

Applied to files:

  • leetcode/medium/0692-top-k-frequent-words/top-k-frequent-words.ts
📚 Learning: 2025-12-30T13:41:58.608Z
Learnt from: CR
Repo: pertrai1/coding-challenges PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-30T13:41:58.608Z
Learning: Applies to leetcode/**/*.{js,ts} : Do not deviate from LeetCode's expected function signatures in solution files

Applied to files:

  • leetcode/medium/0692-top-k-frequent-words/top-k-frequent-words.ts
📚 Learning: 2025-12-30T13:41:58.608Z
Learnt from: CR
Repo: pertrai1/coding-challenges PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-30T13:41:58.608Z
Learning: Do not require tests for all LeetCode solutions (validated on platform)

Applied to files:

  • README.md
📚 Learning: 2025-12-30T13:41:07.646Z
Learnt from: CR
Repo: pertrai1/coding-challenges PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-12-30T13:41:07.646Z
Learning: Applies to leetcode/** : Don't require tests for all LeetCode solutions (validated on platform)

Applied to files:

  • README.md
📚 Learning: 2025-12-30T13:41:58.608Z
Learnt from: CR
Repo: pertrai1/coding-challenges PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-30T13:41:58.608Z
Learning: Applies to leetcode/*/*/ : Ensure problems are in correct difficulty folders (easy, medium, hard) in the LeetCode directory

Applied to files:

  • README.md
📚 Learning: 2025-12-30T13:41:58.608Z
Learnt from: CR
Repo: pertrai1/coding-challenges PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-30T13:41:58.608Z
Learning: Applies to docs/**/*analysis.md : Add badges for difficulty levels in problem documentation: `![Easy](https://img.shields.io/badge/Easy-green)`

Applied to files:

  • README.md
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Agent
  • GitHub Check: review
🔇 Additional comments (3)
leetcode/medium/0692-top-k-frequent-words/README.md (1)

1-33: LGTM! Complete problem documentation.

The README follows all required guidelines with title, difficulty badge, LeetCode link, problem description, examples, and constraints.

Based on coding guidelines: Problem README files should include problem title with difficulty badge, problem description, examples, and constraints.

README.md (1)

9-9: LGTM! Problem counts and entry correctly updated.

All counts are accurately incremented (+1 total, +1 Medium), and the new problem entry is properly added to the String Manipulation section.

Based on learnings: Update README.md when adding new solutions and keep README files up to date with problem counts.

Also applies to: 21-21, 185-185, 233-233

leetcode/medium/0692-top-k-frequent-words/top-k-frequent-words.ts (1)

5-5: LGTM! Export keyword added correctly.

The export keyword makes the function available for imports without deviating from LeetCode's expected signature.

Based on learnings: Don't deviate from LeetCode's expected function signatures (export keyword is acceptable).

Also applies to: 17-19

Contributor

Copilot AI left a comment


Pull request overview

This PR adds a new LeetCode medium problem solution: #692 Top K Frequent Words. The solution implements a frequency counting approach using a HashMap and sorting, with proper TypeScript type annotations and export syntax.

Key changes:

  • Adds TypeScript solution with O(n log n) time complexity using frequency map and sorting
  • Reorganizes file structure by moving README from incorrect location to proper leetcode/medium/0692-top-k-frequent-words/ directory
  • Updates repository statistics (169 total problems, 75 medium problems)

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated no comments.

| File | Description |
| --- | --- |
| leetcode/medium/0692-top-k-frequent-words/top-k-frequent-words.ts | Adds export keyword to function and cleans up trailing whitespace and semicolon formatting |
| leetcode/medium/0692-top-k-frequent-words/README.md | Adds clean Markdown-formatted problem description with examples, constraints, and follow-up question |
| README.md | Updates problem counts (168→169 total, 74→75 medium) and adds problem to String Manipulation section |
| 0692-top-k-frequent-words/README.md | Removes old README from incorrect directory location |
Comments suppressed due to low confidence (1)

leetcode/medium/0692-top-k-frequent-words/top-k-frequent-words.ts:3

  • The complexity analysis could be more precise. The time complexity is O(n + m log m) where n is the total number of words and m is the number of unique words. While it simplifies to O(n log n) in the worst case (all words unique), the current description "where n is words" is ambiguous. Consider clarifying:

    @time - O(n + m log m) where n is the length of the input array and m is the number of unique words; worst case O(n log n) when all words are unique

Also note that the problem's follow-up asks for O(n log k) time complexity using a min-heap approach, which this solution doesn't achieve.

@pertrai1 pertrai1 merged commit fe60ea9 into main Jan 6, 2026
16 of 17 checks passed
@pertrai1 pertrai1 deleted the leetcode-01052025 branch January 6, 2026 01:49
@github-actions

github-actions bot commented Jan 6, 2026

📅 Spaced Repetition Reviews Scheduled!

Great job solving #0692 - Top K Frequent Words! 🎉

To help you retain this knowledge long-term, I've scheduled 5 review sessions using spaced repetition:

| Review | Interval | Schedule Logic |
| --- | --- | --- |
| 1st Review | 1 day after solving | Scheduled now |
| 2nd Review | 3 days after 1st review | Auto-scheduled when 1st completes |
| 3rd Review | 7 days after 2nd review | Auto-scheduled when 2nd completes |
| 4th Review | 14 days after 3rd review | Auto-scheduled when 3rd completes |
| 5th Review | 30 days after 4th review | Auto-scheduled when 4th completes |

What to expect:

  • Your 1st review is scheduled for tomorrow
  • Each subsequent review is scheduled automatically when you complete the previous one
  • This ensures proper spacing even if you complete a review a few days late
  • GitHub issues will be created automatically for each review
  • Each issue will link back to your solution

🧠 Why Spaced Repetition?

Research shows that reviewing material at increasing intervals dramatically improves retention. This adaptive scheduling ensures optimal spacing based on when you actually complete each review!

Check docs/reviews/review-schedule.json to see your review schedule.

github-actions bot pushed a commit that referenced this pull request Jan 6, 2026
  Problem: #0692 - Top K Frequent Words
  PR: #156
  First review scheduled (subsequent reviews auto-scheduled on completion)

  [skip ci]
