Kanishk Sachdev

Software Engineer and Student

Picture this: it's 10:45 AM, 15 minutes before the opening ceremony of HackPSU. The organizers are frantically hunting for a missing HDMI cable for a presentation, and some of us are standing in our War Room playing "guess which cable is where" with a pile of unmarked boxes. Meanwhile, a dozen HDMI cables are sitting in an unlabeled box in our U-Haul.

That was a problem I never wanted to deal with again. So we decided to fix it.

Building an inventory system for a hackathon isn't like managing a warehouse. Everything happens at once, nothing goes where it's supposed to, and your "database" better work when someone's frantically scanning barcodes with a cracked phone screen at 3 AM.

Here's the story of how we built HackPSU's first real inventory system, and all the weird edge cases we discovered along the way.


The Problem: Organized Chaos at Scale

HackPSU isn't just one big room anymore. We've got:

  • Main Venue with 50+ hacking tables (laptops, monitors, cables everywhere)
  • Registration Desk (thousands of swag items, check‑in hardware)
  • Sponsor and Entertainment Booths (demo equipment that costs more than my car)
  • Workshop Rooms (A/V gear that somehow always grows legs and walks away)
  • Storage Areas (the black hole where we always have "that one thing" but can never find it)
  • Mobile Teams (organizers running around with tote bags full of "just in case" items)

Our previous system? A Google Sheet and a prayer. You know how that went.

Items would vanish into backpacks. We'd buy duplicates because nobody remembered what we already had. And post‑event cleanup was basically inventory archaeology—digging through boxes to figure out what we had left.

We needed something that could track a thousand moving pieces across a dozen locations, work on phones, and not break when forty people tried to use it simultaneously.


Architecture Decisions: Learning from Last Year's Mistakes

Remember that Google Sheet I mentioned? It started blocking edits spectacularly mid-event when 40 people tried to edit it at once. So this time, we needed something that could handle real concurrency.

What we actually built (after several architectural arguments)

Backend Choices (and why we made them):

  • NestJS: our API already used it, so we stuck with it and extended the existing functionality
  • MySQL: same reason as above, plus we needed transactions for inventory movements
  • Firebase Auth: because we already had HackPSU accounts integrated with it

Frontend Stack (optimized for chaos):

  • Next.js 15 with App Router—server components are perfect for mobile data usage
  • React Query for optimistic updates (because waiting for network responses during event crunch time can lead to chaos)
  • Tailwind + Radix UI so we could build fast without making it look like a GeoCities site

The key insight: everything had to work offline-first. Event WiFi when 1000 hackers are trying to stream cat videos is... well, let's just say it's not reliable.


Data Model: The "Where's My Stuff" Problem

Here's where things got interesting. The obvious approach would be to just store items and their current locations. But hackathons are messy—items get passed around, borrowed, forgotten, and sometimes sacrificed to the hackathon gods.

We needed to answer not just "where is X?" but also "who moved it there?" and "what happened to it along the way?" Think Git for physical objects.

// What an item looks like in our system
interface InventoryItem {
  id: string // nanoid because UUIDs are ugly in URLs
  name: string
  categoryId: number
  assetTag?: string // The barcode we stick on everything
  serialNumber?: string // For the expensive stuff we actually care about
  status: 'active' | 'checked_out' | 'lost' | 'disposed' | 'archived'

  // The "current state" fields (denormalized for speed)
  holderLocationId?: number // Where it is right now
  holderOrganizerId?: string // Who has it right now

  createdAt: number
  updatedAt: number
}

// Every time something moves, we record it
interface InventoryMovement {
  id: string
  itemId: string
  reason:
    | 'checkout'
    | 'return'
    | 'transfer'
    | 'repair'
    | 'lost'
    | 'disposed'
    | 'other'

  fromLocationId?: number // Where it came from
  fromOrganizerId?: string // Who had it before
  toLocationId?: number // Where it's going
  toOrganizerId?: string // Who's taking it

  notes?: string // "This RPi cannot be used for network projects"
  movedByOrganizerId: string // Who actually performed the move
  createdAt: number
}

The Design Decisions That Saved Us:

1. Denormalized Current State: We store the current location directly on the item instead of calculating it from movements. Why? Because "where is this item right now?" is asked about 1000x more often than "show me the movement history."

2. Flexible Movement Reasons: Instead of rigid checkout/return flows, we made movements flexible. Sometimes an item gets "transferred" between locations. Sometimes it's sent for "repair." Sometimes it's just "other" because hackathons are weird.

3. Status Follows Logic: When someone marks an item as "lost," it automatically clears the holder fields. When they "return" something, it goes back to "active." The API enforces these rules so the frontend doesn't have to think about it.


API Design: Making It Hard to Screw Up

You know what's fun? Watching someone try to "return" an item that was never checked out in the first place. Or better yet, watching them move an item from Location A when it's actually at Location B.

We learned this the hard way during testing. Organizers will find ways to break your system that you never imagined. So we built validation paranoid enough to catch edge cases we couldn't have dreamed up on our own.

@Post("movements")
async createMovement(@Body() dto: CreateMovementDto, @Req() req: Request) {
  const moverId = (req.user as any)?.sub;
  const item = await this.itemRepo.findOne(dto.itemId).exec();
  if (!item) {
    throw new NotFoundException(`Item ${dto.itemId} does not exist`);
  }

  // Paranoid validation: if they say it's coming FROM somewhere,
  // it better actually BE there
  if (dto.fromLocationId !== undefined &&
      item.holderLocationId !== dto.fromLocationId) {
    throw new BadRequestException(
      `Item is not at the specified location. ` +
      `Current: ${item.holderLocationId}, Expected: ${dto.fromLocationId}`
    );
  }

  // Business logic: you can't return something that's not checked out
  if (dto.reason === "return" && item.status !== "checked_out") {
    throw new BadRequestException(
      "Cannot return an item that is not checked out"
    );
  }

  // The magic: update both the movement log AND the item state.
  // (These two writes run inside a MySQL transaction so they
  // succeed or fail together.)
  const movement = await this.moveRepo.createOne({
    id: nanoid(36),
    createdAt: Date.now(),
    movedByOrganizerId: moverId,
    ...dto,
  }).exec();

  // Status logic that took us way too long to get right
  switch (dto.reason) {
    case "checkout":
      item.status = "checked_out";
      break;
    case "return":
      item.status = "active";
      break;
    case "lost":
      item.status = "lost";
      // Lost items don't belong to anyone
      item.holderLocationId = null;
      item.holderOrganizerId = null;
      break;
    case "disposed":
      item.status = "disposed";
      // Disposed items also don't belong to anyone
      item.holderLocationId = null;
      item.holderOrganizerId = null;
      break;
    // ... you get the idea
  }

  await this.itemRepo.patchOne(item.id, item).exec();
  return movement;
}

The Validation Philosophy:

  • Trust but verify everything: if someone says an item is at Location X, we check
  • Fail fast and loudly: better to throw an error than silently corrupt data
  • Atomic updates: movement creation and item updates happen together or not at all

The result? Organizers can click buttons confidently without worrying about breaking the inventory state. The API handles the complexity so they don't have to.
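That "atomic updates" bullet is worth making concrete. The real implementation leans on MySQL transactions, but the all-or-nothing idea can be sketched in plain TypeScript with an in-memory store (the names and shapes here are illustrative, not our actual repository API):

```typescript
type Item = { id: string; status: string }
type Movement = { id: string; itemId: string; reason: string }

// All-or-nothing: either both the movement log and the denormalized
// item state are updated, or neither is.
function applyMovement(
  items: Map<string, Item>,
  log: Movement[],
  movement: Movement,
  nextStatus: string,
): void {
  const item = items.get(movement.itemId)
  if (!item) throw new Error(`Unknown item: ${movement.itemId}`)

  // Snapshot state so a failure between the two writes can be undone
  const snapshot = { ...item }
  const logLen = log.length

  try {
    // write 1: append to the movement log
    log.push(movement)
    // write 2: validate and update item state; any failure here
    // must undo write 1 as well
    if (movement.reason === 'return' && item.status !== 'checked_out') {
      throw new Error('Cannot return an item that is not checked out')
    }
    items.set(item.id, { ...item, status: nextStatus })
  } catch (err) {
    // Roll back: restore the log and the item together
    log.length = logLen
    items.set(item.id, snapshot)
    throw err
  }
}
```

A real database gives you this for free with a transaction; the point of the sketch is that the invariant ("log and state never disagree") is what matters, not the mechanism.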


Frontend: When Network Requests Take Forever

Here's the thing about hackathon organizers: they want everything to feel instant. If they click a button and nothing happens for 2 seconds, they assume the app is broken. So we had to make sure our frontend felt snappy, even when the network was slow.

Our solution? Optimistic updates everywhere. When someone clicks "check out item," the UI updates immediately and fixes itself later if something went wrong.

// The hook that makes everything feel instant
function useCreateMovement() {
  const queryClient = useQueryClient()

  return useMutation({
    mutationFn: (movement: CreateMovementDto) =>
      inventoryApi.createMovement(movement),

    // The magic happens here: update the UI immediately
    onMutate: async (newMovement) => {
      // Cancel any in-flight queries
      await queryClient.cancelQueries({ queryKey: ['items'] })

      // Save current state in case we need to rollback
      const previousItems = queryClient.getQueryData(['items'])

      // Update the UI optimistically
      queryClient.setQueryData(['items'], (old: InventoryItem[]) =>
        old?.map((item) =>
          item.id === newMovement.itemId
            ? {
                ...item,
                status: getNewStatus(item.status, newMovement.reason),
                holderLocationId: newMovement.toLocationId,
                holderOrganizerId: newMovement.toOrganizerId,
              }
            : item,
        ),
      )

      return { previousItems }
    },

    // If the server says "nope," undo everything
    onError: (err, newMovement, context) => {
      queryClient.setQueryData(['items'], context?.previousItems)
      toast.error(`Failed: ${err.message}`)
    },

    // When the server finally responds, sync everything
    onSettled: () => {
      queryClient.invalidateQueries({ queryKey: ['items'] })
      queryClient.invalidateQueries({ queryKey: ['movements'] })
    },
  })
}
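The `getNewStatus` helper the hook calls isn't shown above; it's a small pure function that mirrors the server's status rules so the optimistic update predicts what the API will do. A minimal sketch of that mapping (the `transfer`/`repair`/`other` fallthrough is our assumption about the less common reasons):

```typescript
type ItemStatus = 'active' | 'checked_out' | 'lost' | 'disposed' | 'archived'
type MovementReason =
  | 'checkout' | 'return' | 'transfer' | 'repair' | 'lost' | 'disposed' | 'other'

// Mirror the server's status transitions client-side so the optimistic
// UI update matches what the API will eventually persist.
function getNewStatus(current: ItemStatus, reason: MovementReason): ItemStatus {
  switch (reason) {
    case 'checkout': return 'checked_out'
    case 'return':   return 'active'
    case 'lost':     return 'lost'
    case 'disposed': return 'disposed'
    default:         return current // transfer/repair/other keep the status
  }
}
```

Keeping this logic duplicated on the client is a deliberate trade-off: if it ever drifts from the server, the onSettled invalidation corrects the UI anyway.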

Why This Approach Works:

  • Instant feedback: users see changes immediately, even on slow networks
  • Graceful failures: if something goes wrong, we roll back and show an error
  • Eventual consistency: when the network catches up, everything syncs

PWA Features for Real-World Use:

  • Service Worker caches the app shell so it loads even when WiFi is completely dead
  • Web App Manifest lets people "install" it on their phones like a real app
  • Large touch targets because we know organizers will be using this on their phones

QR Code Integration: When Buttons Just Aren't Fast Enough

Here's where things got interesting. During our first test run, organizers were spending way too much time typing asset tags on their phones. You know how it is—cramped fingers, autocorrect fighting you, and someone's always asking "wait, was that a 1 or an I?"

So we built an integrated barcode/QR scanner right into the form. But here's the fun part: we didn't just scan QR codes. We went full barcode nerd.

// The scanner that handles everything
<Scanner
  onScan={handleScanResult}
  onError={handleScanError}
  formats={[
    "qr_code",    // For URLs and structured data
    "code_128",   // The barcode we print on labels
    "code_39",    // Legacy equipment might have this
    "ean_13",     // Retail products
    "ean_8",      // Smaller retail items
  ]}
  constraints={{
    deviceId: selectedCameraId ? { exact: selectedCameraId } : undefined,
  }}
  components={{
    finder: true,  // That overlay box that helps you aim
    torch: true,   // Flashlight toggle for dim storage rooms
  }}
/>
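The `handleScanResult` callback mostly just normalizes whatever the scanner hands back into an asset tag before populating the form. A rough sketch of that normalization (the URL handling is an assumption about how a QR payload might be structured; plain barcodes are used as-is):

```typescript
// Turn a raw scan result into the value we put in the asset-tag field.
// Assumption: a QR code may encode a URL whose last path segment is the
// tag; 1D barcodes (CODE128, EAN, ...) carry the tag directly.
function normalizeScan(rawValue: string): string {
  const raw = rawValue.trim()
  try {
    const url = new URL(raw)
    const segments = url.pathname.split('/').filter(Boolean)
    return segments[segments.length - 1] ?? raw
  } catch {
    return raw // not a URL: treat the scanned text as the tag itself
  }
}
```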

The best part? We integrated barcode printing too. Because what's the point of a scanning system if you can't print labels for everything?

const handlePrintBarcode = () => {
  const raw = form.getValues('assetTag')?.trim()
  if (!raw) {
    toast.error('Enter or generate an asset tag first.')
    return
  }

  // Open a print window sized for our 2x1 inch labels
  const w = window.open('', 'PRINT', 'width=600,height=400')
  if (!w) return // popup blocked

  // Render the barcode into an SVG inside the print window
  const svg = w.document.createElementNS('http://www.w3.org/2000/svg', 'svg')
  w.document.body.appendChild(svg)

  // Generate a CODE128 barcode using JsBarcode
  JsBarcode(svg, raw, {
    format: 'CODE128',
    displayValue: true, // Show the number below the barcode
    fontSize: 12,
    height: 40,
    margin: 0,
  })

  // Give the browser a beat to render, then print and close
  setTimeout(() => {
    w.focus()
    w.print()
    w.close()
  }, 100)
}

The Camera Selector Problem

One detail that bit us: most phones have multiple cameras (front, back, sometimes more). And scanning a QR code on the ultra-wide camera is... not great.

So we built a camera selector that only shows actual cameras, not the "default" entry that doesn't work for scanning.

function CameraSelector({ selectedDeviceId, onDeviceChange }) {
  const devices = useDevices()

  // Filter for actual cameras (not "default" entries)
  const videoDevices = devices.filter(
    (device) => device.kind === 'videoinput' && device.deviceId !== 'default',
  )

  // Only show selector if multiple cameras available
  if (videoDevices.length <= 1) return null

  return (
    <Select value={selectedDeviceId} onValueChange={onDeviceChange}>
      {videoDevices.map((device, index) => (
        <SelectItem key={device.deviceId} value={device.deviceId}>
          {device.label || `Camera ${index + 1}`}
        </SelectItem>
      ))}
    </Select>
  )
}

Asset Tag Generation: Random but Readable

We also added a "Generate" button for asset tags. Nothing fancy, just 13 random digits. But we learned something interesting: completely random numbers are hard for humans to verify. So we format them with hyphens when displayed (even though we store them as plain numbers for CODE128 compatibility).

(Note: We did think about ean_13 barcodes, but they require specific formatting and checksums that were overkill for our needs. Plus, we wanted something simple that anyone could read.)

const generateAssetTag = () => {
  const number = Math.floor(Math.random() * 9999999999999)
    .toString()
    .padStart(13, '0')
  return number // Store as plain digits for barcode compatibility
}
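And the hyphenated display format mentioned above is only a few lines more. A sketch (the 4-4-5 grouping is our arbitrary choice, not any barcode standard):

```typescript
// Display-only formatting: the stored value stays plain digits for
// CODE128 compatibility. Grouping just gives humans checkpoints
// when reading a tag back over someone's shoulder.
function formatAssetTag(tag: string): string {
  return [tag.slice(0, 4), tag.slice(4, 8), tag.slice(8)]
    .filter(Boolean)
    .join('-')
}
```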

Real-World Workflow:

  1. Create item → click "Generate" → asset tag appears
  2. Click "Print" → label prints on 2x1 inch sticker
  3. Stick label on item → physical world meets digital tracking
  4. Later, need to check out item? → scan label → instant form population

The scanning works in reverse too. Someone finds an unlabeled item? Scan any existing barcode (manufacturer, retail, whatever) and use that as the asset tag. The system doesn't care what the barcode contains—it just needs to be unique and scannable.


Analytics: Learning from Chaos

You know what's interesting? Watching how people actually use your system versus how you thought they'd use it. We added PostHog tracking everywhere to understand the real workflows.

// Track everything that might be useful later
function trackInventoryEvent(event: string, properties: object) {
  posthog.capture(`inventory_${event}`, {
    ...properties,
    timestamp: Date.now(),
    user_role: getCurrentUserRole(),
  })
}

// Some examples of what we learned to track
trackInventoryEvent('barcode_scan_failed', {
  error_type: 'camera_permission_denied',
  device_type: navigator.userAgent.includes('iPhone') ? 'ios' : 'android',
  retry_count: attemptNumber,
})

trackInventoryEvent('item_checkout', {
  scan_vs_manual: movement.scannedAssetTag ? 'scan' : 'manual',
  time_to_complete: Date.now() - formStartTime,
  location_popularity: getLocationRank(movement.toLocationId),
})

What We Discovered:

  • Popular Items: Cables, adapters, and power strips get checked out 5x more than anything else
  • Location Patterns: Storage areas are where items go to die (lowest return rate)
  • Peak Times: 80% of inventory activity happens in the first 6 hours and the last 2 hours of the event

The data completely changed how we stock events. Who knew people lose so many phone chargers?


Performance: When 40 People Hit Your Database Simultaneously

Here's something they don't teach you in CS classes: your beautiful, well-designed database queries become very ugly when forty people are all scanning barcodes at the same time.

Database Lessons Learned:

  • Composite Indexes on (itemId, createdAt) because movement history queries were killing us
  • Indexes on status because we don't care about archived items 99% of the time (MySQL doesn't support true partial indexes, so a plain index covering the status filter does the job)

Frontend Reality Checks:

  • React Query with aggressive caching because fetching the same item list 40 times per second is wasteful
  • Virtualized Lists because rendering 1000+ table rows crashes mobile browsers
  • Code Splitting because nobody wants to download the entire app to scan one barcode

The API Optimization That Saved Us: Bulk operations. Seriously. We went from "import items one by one" to "import 500 items in one request" and cut import time from 15 minutes to 30 seconds.

// Before: one API call per item (please don't do this)
for (const item of items) {
  await api.createItem(item) // 500 network requests
}

// After: one API call for everything
await api.bulkCreateItems(items) // 1 network request
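One refinement worth noting: a single 500-item request is fine, but for much larger imports we'd split the payload into batches so one request-size limit (or one bad row) doesn't sink the whole import. A generic chunking helper, as a sketch rather than our exact code:

```typescript
// Split an array into fixed-size batches for bulk API calls.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = []
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size))
  }
  return batches
}

// Sending batches sequentially means a failure points at one batch,
// not at an opaque 10,000-item request:
//
// for (const batch of chunk(allItems, 500)) {
//   await api.bulkCreateItems(batch)
// }
```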

Response Times:

  • Item lookups: under 200ms (fast enough for real-time scanning)
  • Movement creation: under 100ms (optimistic updates make this feel instant)
  • Bulk imports: 500 items in 3 seconds (acceptable for setup)

The key insight: optimize for the user experience, not the perfect technical solution.


Results: From Chaos to Control

The "Before Times":

  • Items disappeared into the void daily (usually found in someone's backpack three weeks later)
  • Thirty minutes minimum to hunt down a simple extension cord
  • Bought the same thing twice because nobody knew we already had it
  • Post‑event reconciliation was basically an archaeological expedition

After we deployed:

  • 100% item visibility—we always know where everything is, even if it's "checked out to Sarah, probably in the main venue somewhere"
  • Sub‑10 second lookups via barcode scanning (organizers started racing each other to see who could scan fastest)
  • 85% fewer lost items—turns out people return things when they know they're being tracked
  • Instant reconciliation—no more staying until 3 AM counting cables
  • 40+ organizers using it simultaneously during peak times without breaking a sweat

Technical Performance Numbers:

  • API Response Times: 95th percentile under 200ms (fast enough that the QR scanner feels instant)
  • Frontend Load Times: First Contentful Paint under 1.2s (mobile networks during events are... not great)
  • Mobile Experience: Lighthouse score of 95+ (because if it doesn't work on a phone, it doesn't work)
  • Uptime: 99.9% during the 48‑hour event (the 0.1% was a networking hiccup, not our code)

The real win? Organizers stopped asking "has anyone seen the..." in our Teams channels. That alone made the whole project worth it.


TL;DR: What We Actually Built

  • Real‑time tracking of 1000+ items without losing our sanity
  • Mobile‑first design because laptops don't fit in your pocket
  • QR code everything because typing asset tags on phones is torture
  • Optimistic updates so the app feels fast even when WiFi doesn't
  • Bulletproof validation because organizers will break your system in creative ways

The Real Impact: HackPSU went from "where the hell is that HDMI cable?" to "it's checked out to Sarah, probably in Workshop Room 2."

More importantly, we stopped losing stuff. Organizers could focus on helping participants instead of playing inventory detective. And post‑event cleanup went from an archaeological expedition to a simple database query.

What's Next: We're working on using the data we collected to improve future events. More accurate stock predictions, better item categorization, and maybe even ML suggestions for what to buy next time.

Lessons Learned: Build for chaos, not ideal conditions. Users will find ways to break your system that you never imagined. And when in doubt, add more validation.


Got your own inventory management horror stories? I'd love to hear them. Building software for real‑world chaos is one of my favorite challenges.


Feel free to contact me at kanishksachdev@gmail.com