V3/feat/file upload #612
base: main
Conversation
Walkthrough
Adds MinIO/S3 infra and env vars, S3 clients, a file-upload API and upload helpers, replaces User.image with avatarFileId across schema/types, adds UI for changing the profile picture (drawer, form, upload), updates the user card to show the avatar image, and adds translations, scripts, and dependencies.
Sequence Diagram(s)
sequenceDiagram
autonumber
actor U as User
participant UC as UserCard
participant D as ChangeProfilePictureDrawer
participant UP as Upload Hook
participant API as App API (update profile)
participant S3 as S3/MinIO
participant DB as Database
participant AUTH as Auth/Session
U->>UC: Click avatar
UC->>D: open drawer (state)
U->>D: choose file & submit
D->>UP: upload(file, meta:{userId,route:"avatar"})
UP->>S3: PUT object (userId/avatar.ext)
S3-->>UP: objectKey
UP-->>D: upload success / objectKey
D->>API: mutate { avatarFileId: objectKey }
API->>DB: update User.avatarFileId
DB-->>API: ok
API-->>D: success
D->>AUTH: refetch session
AUTH-->>D: updated session
D->>U: close drawer + toast success
UC->>UC: render AvatarImage(VITE_S3_BUCKET_PUBLIC_URL + objectKey)
sequenceDiagram
autonumber
participant DC as docker-compose
participant M as minio
participant MC as createbucket (mc)
note over DC: profile=init
DC->>M: start minio
M-->>DC: healthcheck ready
DC->>MC: run init script
MC->>M: configure alias, user, policy, create bucket, set public policy
MC-->>DC: "Bucket setup complete"
Estimated code review effort
🎯 4 (Complex) | ⏱️ ~60 minutes
Suggested reviewers
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 16
🧹 Nitpick comments (22)
app/server/auth.tsx (1)
35-37
: Confirm nullability and exposure of profile picture fields
Prisma has profilePictureId String?, but here the additional field is declared as type: 'string' without an explicit optional/null config. If BetterAuth treats additional fields as required by default, this could break sign-up. Also, if the UI relies on user.image as the canonical URL, you may not need to expose profilePictureUrl here; if you do, add it consistently.
Would you like me to check BetterAuth's expected shape for optional additional fields and propose a consistent mapping (id vs. URL)?
app/components/form/field-text/index.tsx (1)
19-21
: Optional: rename/component split for clarity
FieldText now accepts type="file". Consider a dedicated FieldFile to encapsulate accept/multiple/size validation and previews, leaving FieldText for textual inputs.
app/components/form/form-field-controller.tsx (1)
82-86
: Minor: group simple inputs together
You can fold 'file' into the 'text' | 'email' | 'tel' group once FieldText handles the special case internally, keeping the switch compact.
- case 'text':
- case 'email':
- case 'tel':
-   return <FieldText {...props} />;
+ case 'text':
+ case 'email':
+ case 'tel':
+ case 'file':
+   return <FieldText {...props} />;
@@
- case 'file':
-   return <FieldText {...props} />;
Also applies to: 108-109
prisma/schema.prisma (1)
28-29
: Key strategy: prefer a stable avatars/${user.id} S3 key
Per prior learning, avatar keys should be avatars/${user.id} to overwrite on update (avoiding storage bloat). If that's the approach, profilePictureId can be derived and may not need storing; otherwise, ensure lifecycle rules delete old keys.
Would you like a migration plan to remove profilePictureUrl and rely on image + deterministic keying?
app/features/user/schema.ts (1)
28-29
: Type it consistently and prevent empty strings
Consider zu.string.nonEmptyNullish(z.string()) for profilePictureId to avoid persisting empty values. If you keep profilePictureUrl in Prisma, expose it here as z.string().nullish() or remove it from the schema to avoid confusion.
- profilePictureId: z.string().nullish(),
+ profilePictureId: zu.string.nonEmptyNullish(z.string()),
+ // profilePictureUrl: z.string().nullish(), // if retained in Prisma and used
app/locales/ar/account.json (1)
23-31
: LGTM: Arabic translations added
Strings are present for all new keys. If you standardize on "update" vs "change" across locales, align later.
app/locales/sw/account.json (1)
24-30
: Swahili phrasing/consistency tweaks ("update" vs "improve")
Prefer "sasisha/imesasishwa" to match existing copy ("changeNameDrawer") and avoid "boreshwa" (improve).
"changeProfilePictureDrawer": {
  "title": "Badilisha avatar yako",
  "description": "Fomu ya kubadilisha avatar yako",
  "label": "Avatar",
- "submitButton": "Badilisha",
- "successMessage": "Avatar imeboreshwa",
- "errorMessage": "Imeshindikana kuboresha avatar yako"
+ "submitButton": "Sasisha",
+ "successMessage": "Avatar imesasishwa",
+ "errorMessage": "Imeshindikana kusasisha avatar yako"
}
app/env/server.ts (1)
52-53
: Typo in comment ("prodution")
- // Check in prodution if the value is not REPLACE ME
+ // Check in production if the value is not REPLACE ME
.env.example (1)
4-15
: dotenv-linter nits (quotes/order)
To appease the linter: drop unnecessary quotes and reorder keys (non-functional).
package.json (1)
35-35
: Make dk:start idempotent for first run
docker compose --profile dev start fails if services weren't created. Prefer up -d so it also creates them when needed.
- "dk:start": "docker compose --profile dev start",
+ "dk:start": "docker compose --profile dev up -d",
app/lib/object-storage/index.ts (1)
5-10
: LGTM; MinIO client correctly parameterized
Looks consistent with better-upload's MinIO helper; credentials and endpoint come from validated env.
If you ever point to AWS S3, ensure region is a real AWS region and consider exposing a forcePathStyle toggle (MinIO needs it; AWS doesn't).
app/features/account/schema.ts (1)
13-19
: Align validator helper for consistency
To match the rest of the file, use zu.string.nonEmpty for profilePicture.
export const zFormFieldsAccountUpdateProfilePicture = () =>
  z.object({
-   profilePicture: z.string().min(1),
+   profilePicture: zu.string.nonEmpty(z.string()),
  });
Optional: name the field profilePictureId to reflect what the backend expects, and disable submit until an upload succeeds.
app/features/account/user-card.tsx (1)
25-27
: Remove debug log
Extraneous console log on button click; the drawer trigger already handles opening.
- onClick={() => {
-   console.log('update picture');
- }}
+ onClick={() => {}}
Or drop the onClick entirely.
app/server/routers/account.tsx (2)
69-71
: Resolve TODO: return proper errors and/or separate RPC
Swallowing failures hides issues from the client. Either:
- bubble up a 5xx/structured error, or
- split avatar update into a dedicated RPC and return success/failure explicitly.
I can refactor this into account.updateAvatar with clear error handling if helpful.
87-92
: Avoid clearing name implicitly
name: input.name ?? '' will wipe the name if name is omitted. Prefer keeping the existing value if undefined.
- name: input.name ?? '',
+ ...(typeof input.name === 'string' ? { name: input.name } : {}),
app/features/account/change-profile-picture-drawer.tsx (5)
42-47
: Use defaultValues instead of values to avoid controlled re-renders (especially with file inputs)
react-hook-form recommends defaultValues for initial state; values turns the form into a fully controlled source of truth and can cause unexpected resets or warnings with file inputs.
- const form = useForm<FormFieldsAccountUpdateProfilePicture>({
-   resolver: zodResolver(zFormFieldsAccountUpdateProfilePicture()),
-   values: {
-     profilePicture: '',
-   },
- });
+ const form = useForm<FormFieldsAccountUpdateProfilePicture>({
+   resolver: zodResolver(zFormFieldsAccountUpdateProfilePicture()),
+   defaultValues: {
+     profilePicture: '',
+   },
+ });
83-96
: Preserve existing query params when toggling the drawer
Your navigate() overwrites the entire search object. Merge instead to avoid clobbering unrelated params.
- onOpenChange={(open) => {
-   form.reset();
-   router.navigate({
-     replace: true,
-     to: '.',
-     search: {
-       state: open ? 'change-profile-picture' : '',
-     },
-   });
- }}
+ onOpenChange={(open) => {
+   form.reset();
+   router.navigate({
+     replace: true,
+     to: '.',
+     search: (prev) => ({ ...prev, state: open ? 'change-profile-picture' : '' }),
+   });
+ }}
119-139
: Harden the file input UX: accept filter, size/type guard, avoid autofocus on mobile.
- Add accept="image/*" (or a stricter list).
- Enforce a size limit and show a friendly error.
- Autofocus on a file input can trigger the picker immediately; consider removing.
<FormFieldController
  control={form.control}
  type="file"
  name="profilePicture"
  size="lg"
+ accept="image/png,image/jpeg,image/webp"
  onChange={(e) => {
-   if (e.target.files?.[0]) {
-     upload(e.target.files[0], {
+   const f = e.target.files?.[0];
+   if (f) {
+     const MAX = 5 * 1024 * 1024; // 5MB
+     if (!/^image\/(png|jpe?g|webp)$/.test(f.type) || f.size > MAX) {
+       form.setError('profilePicture', { message: t('account:changeProfilePictureDrawer.invalidFile') || 'Invalid file' });
+       return;
+     }
+     upload(f, {
        metadata: {
          userId: session.data?.user.id,
        },
      });
    }
  }}
- autoFocus
/>
142-149
: Disable the submit button until an upload key exists; keep loading state as-is
Prevents accidental empty submits.
<Button
  type="submit"
  className="w-full"
  size="lg"
  loading={updateUser.isPending || isPending}
+ disabled={!uploadedFile?.objectKey || updateUser.isPending || isPending}
>
64-81
: Leverage server-side keying to avoid storage bloat
Ensure the upload route stores the avatar at avatars/${user.id} (per team convention) and update the returned key accordingly, so subsequent uploads overwrite the previous image.
This aligns with our prior choice to overwrite avatars to avoid orphaned objects. If the server already enforces this, great—just confirm the returned key matches the convention.
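The keying convention referenced here can be sketched as a deterministic key builder. The avatars/${user.id} shape comes from the team learning quoted in this review; the function itself is illustrative, not the project's code:

```typescript
// Sketch of the deterministic keying convention: one stable object key per
// user, so a re-upload overwrites the previous avatar instead of piling up
// new objects in the bucket. The function name is illustrative.
function avatarObjectKey(userId: string): string {
  if (!userId) {
    throw new Error('userId is required to build a stable avatar key');
  }
  return `avatars/${userId}`;
}
```

Because the key depends only on the user id, two uploads from the same user target the same object, which is what makes the overwrite-on-update behavior possible.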
docker-compose.yml (2)
37-53
: Optional: avoid public bucket in dev by returning signed URLs
Granting anonymous download simplifies dev but normalizes a risky pattern. Consider removing anonymous access and using signed URLs even in dev.
- mc anonymous set download default/${S3_BUCKET_NAME};
+ # Consider using presigned URLs instead of anonymous access in dev.
+ # mc anonymous set download default/${S3_BUCKET_NAME};
19-36
: Pin by digest for reproducibility
The floating tag may change over time. Pinning to a sha256 digest makes CI/dev deterministic.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (21)
.env.example (2 hunks)
app/components/form/field-text/index.tsx (1 hunks)
app/components/form/form-field-controller.tsx (1 hunks)
app/env/server.ts (1 hunks)
app/features/account/change-profile-picture-drawer.tsx (1 hunks)
app/features/account/schema.ts (1 hunks)
app/features/account/user-card.tsx (2 hunks)
app/features/user/schema.ts (1 hunks)
app/lib/object-storage/index.ts (1 hunks)
app/locales/ar/account.json (1 hunks)
app/locales/en/account.json (1 hunks)
app/locales/fr/account.json (1 hunks)
app/locales/sw/account.json (1 hunks)
app/routeTree.gen.ts (11 hunks)
app/routes/api/upload.ts (1 hunks)
app/routes/app/_layout/account.index.tsx (1 hunks)
app/server/auth.tsx (1 hunks)
app/server/routers/account.tsx (2 hunks)
docker-compose.yml (2 hunks)
package.json (3 hunks)
prisma/schema.prisma (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: yoannfleurydev
PR: BearStudio/start-ui-web#443
File: src/lib/s3/config.ts:13-13
Timestamp: 2025-02-21T08:38:40.338Z
Learning: In the avatar upload feature, the S3 key is intentionally set to `avatars/${user.id}` to ensure that new avatar uploads overwrite the previous one, preventing storage bloat and simplifying avatar management.
🧬 Code graph analysis (6)
app/components/form/form-field-controller.tsx (1)
app/components/form/field-text/index.tsx (1)
FieldText (24-82)
app/features/account/change-profile-picture-drawer.tsx (5)
app/features/auth/client.ts (1)
authClient (13-22)
app/features/account/schema.ts (2)
FormFieldsAccountUpdateProfilePicture (13-15)
zFormFieldsAccountUpdateProfilePicture (16-19)
app/lib/orpc/client.ts (1)
orpc (24-24)
app/components/ui/responsive-drawer.tsx (8)
ResponsiveDrawer (34-40)
ResponsiveDrawerTrigger (42-49)
ResponsiveDrawerContent (78-85)
ResponsiveDrawerHeader (87-94)
ResponsiveDrawerTitle (114-121)
ResponsiveDrawerDescription (123-130)
ResponsiveDrawerBody (96-103)
ResponsiveDrawerFooter (105-112)
app/components/form/form-field-controller.tsx (1)
FormFieldController (64-131)
app/lib/object-storage/index.ts (1)
app/env/server.ts (1)
envServer (9-44)
app/features/account/user-card.tsx (1)
app/features/account/change-profile-picture-drawer.tsx (1)
ChangeProfilePictureDrawer (36-155)
app/routes/api/upload.ts (2)
app/lib/object-storage/index.ts (1)
s3client (5-10)
app/env/server.ts (1)
envServer (9-44)
app/server/routers/account.tsx (2)
app/env/server.ts (1)
envServer (9-44)
app/lib/object-storage/index.ts (1)
s3client (5-10)
🪛 GitHub Actions: 🧪 E2E Tests
app/env/server.ts
[error] 9-9: Environment validation failed during 'pnpm db:seed': Missing required environment variables: DOCKER_MINIO_USERNAME, DOCKER_MINIO_PASSWORD, S3_ACCESS_KEY_ID, S3_SECRET_ACCESS_KEY, S3_BUCKET_PUBLIC_URL, S3_ENDPOINT.
🪛 GitHub Check: 🧹 Linter
app/routes/api/upload.ts
[warning] 17-17:
Complete the task associated to this "TODO" comment
app/server/routers/account.tsx
[warning] 69-69:
Complete the task associated to this "TODO" comment
🪛 dotenv-linter (3.3.0)
.env.example
[warning] 4-4: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 5-5: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 6-6: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 7-7: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 7-7: [UnorderedKey] The DOCKER_MINIO_PASSWORD key should go before the DOCKER_MINIO_UI_PORT key
(UnorderedKey)
[warning] 11-11: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 11-11: [UnorderedKey] The S3_BUCKET_NAME key should go before the S3_ENDPOINT key
(UnorderedKey)
[warning] 13-13: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 14-14: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 15-15: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 15-15: [UnorderedKey] The S3_REGION key should go before the S3_SECRET_ACCESS_KEY key
(UnorderedKey)
[warning] 18-18: [DuplicatedKey] The S3_ENDPOINT key is duplicated
(DuplicatedKey)
[warning] 19-19: [DuplicatedKey] The S3_BUCKET_NAME key is duplicated
(DuplicatedKey)
[warning] 19-19: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 19-19: [UnorderedKey] The S3_BUCKET_NAME key should go before the S3_ENDPOINT key
(UnorderedKey)
[warning] 20-20: [DuplicatedKey] The S3_BUCKET_PUBLIC_URL key is duplicated
(DuplicatedKey)
[warning] 21-21: [DuplicatedKey] The S3_ACCESS_KEY_ID key is duplicated
(DuplicatedKey)
[warning] 21-21: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 22-22: [DuplicatedKey] The S3_SECRET_ACCESS_KEY key is duplicated
(DuplicatedKey)
[warning] 22-22: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 23-23: [DuplicatedKey] The S3_REGION key is duplicated
(DuplicatedKey)
[warning] 23-23: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 23-23: [UnorderedKey] The S3_REGION key should go before the S3_SECRET_ACCESS_KEY key
(UnorderedKey)
🔇 Additional comments (9)
app/components/form/form-field-controller.tsx (1)
108-109
: File case renders via FieldText — ensure special handling exists
Routing type="file" to FieldText is fine as long as the component omits value and forwards File|File[] to RHF (see suggested fix in FieldText). Otherwise, this will misbehave.
app/routes/app/_layout/account.index.tsx (1)
11-11
: LGTM: search state extended for profile picture flow
The new 'change-profile-picture' state is correctly validated and defaults remain unchanged.
app/locales/en/account.json (1)
23-31
: LGTM: new i18n keys for profile picture drawer
Keys cover title, description, label, CTA, and toast messages. Consistent with existing naming.
app/locales/fr/account.json (1)
24-30
: FR translations look good and consistent with existing keys.
app/routeTree.gen.ts (1)
39-39
: Generated route wiring for /api/upload looks correct.
Also applies to: 182-186, 386-387, 397-398, 409-410, 420-446, 456-456, 646-652, 840-841
package.json (1)
44-44
: Verify pinned deps & advisories
package.json missing in sandbox — couldn't verify pins. npm registry: [email protected]; @aws-sdk/[email protected]. GitHub advisories query returned no advisories for better-upload. File: package.json (lines 44, 66). Confirm package.json is present, validate the pinned versions, and run npm audit / GH advisory checks before merging.
app/features/account/user-card.tsx (1)
21-49
: Nice UX addition
Wrapping the header in ChangeProfilePictureDrawer is clean and discoverable.
app/server/routers/account.tsx (1)
53-55
: Keying strategy: use a deterministic per-user key (avatars/${context.user.id})
Frontend saves uploadedFile?.objectKey into profilePictureId (app/features/account/change-profile-picture-drawer.tsx:106); I could not find where objectKey is generated — ensure the upload backend/service (or src/lib/s3 config) writes the object as avatars/${context.user.id} (overwriting the previous file) to prevent bucket bloat.
app/features/account/change-profile-picture-drawer.tsx (1)
49-63
: Confirmed — useUploadFile returns objectKey; no change required.
Docs show the uploaded file object includes objectKey, so uploadedFile.objectKey is correct (applies to lines 49–63 and 101–108).
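For rendering, the sequence diagram earlier in this PR shows the avatar URL built as VITE_S3_BUCKET_PUBLIC_URL + objectKey. A slash-normalizing join avoids double or missing separators; the helper below is an illustrative sketch, not code from the PR:

```typescript
// Illustrative helper: join the bucket's public base URL and an object key,
// tolerating a trailing slash on the base and a leading slash on the key.
function publicObjectUrl(bucketPublicUrl: string, objectKey: string): string {
  const base = bucketPublicUrl.replace(/\/+$/, '');
  const key = objectKey.replace(/^\/+/, '');
  return `${base}/${key}`;
}
```

This keeps the rendered AvatarImage src stable regardless of whether the env var was configured with or without a trailing slash.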
type: 'text' | 'email' | 'tel' | 'file';
containerProps?: ComponentProps<'div'>;
} & ComponentProps<typeof Input>
File input is incorrectly controlled; pass files to RHF and omit value
For type="file", spreading ...field injects a value prop and forwards the raw event, which triggers React warnings and prevents RHF from receiving the File/File[]. Handle the file case specially.
Apply:
@@
- defaultValue={defaultValue}
+ defaultValue={type === 'file' ? undefined : defaultValue}
@@
- <Input
- type={type}
- id={ctx.id}
- aria-invalid={fieldState.error ? true : undefined}
- aria-describedby={
- !fieldState.error
- ? `${ctx.descriptionId}`
- : `${ctx.descriptionId} ${ctx.errorId}`
- }
- {...rest}
- {...field}
- onChange={(e) => {
- field.onChange(e);
- rest.onChange?.(e);
- }}
- onBlur={(e) => {
- field.onBlur();
- rest.onBlur?.(e);
- }}
- />
+ <Input
+ type={type}
+ id={ctx.id}
+ aria-invalid={fieldState.error ? true : undefined}
+ aria-describedby={
+ !fieldState.error
+ ? `${ctx.descriptionId}`
+ : `${ctx.descriptionId} ${ctx.errorId}`
+ }
+ {...rest}
+ // Avoid controlling file inputs; only pass minimal RHF bindings
+ name={field.name}
+ ref={field.ref}
+ {...(type === 'file'
+ ? {
+ onChange: (e: React.ChangeEvent<HTMLInputElement>) => {
+ const files = Array.from(e.target.files ?? []);
+ field.onChange(rest.multiple ? files : files[0] ?? null);
+ rest.onChange?.(e);
+ },
+ onBlur: (e: React.FocusEvent<HTMLInputElement>) => {
+ field.onBlur();
+ rest.onBlur?.(e);
+ },
+ }
+ : {
+ value: field.value,
+ onChange: (e: React.ChangeEvent<HTMLInputElement>) => {
+ field.onChange(e);
+ rest.onChange?.(e);
+ },
+ onBlur: (e: React.FocusEvent<HTMLInputElement>) => {
+ field.onBlur();
+ rest.onBlur?.(e);
+ },
+ })}
+ />
Also applies to: 57-76
🤖 Prompt for AI Agents
In app/components/form/field-text/index.tsx around lines 19-21 (and also apply
the same fix to lines 57-76), the component currently spreads ...field into the
Input which for type="file" passes a value prop and the raw event, causing React
warnings and preventing React Hook Form from receiving File/File[]; modify the
component to handle type==="file" specially by omitting the field.value when
spreading, and wire onChange to extract files (e.g., e.target.files) and call
field.onChange with the File or File[] and call field.onBlur as appropriate;
ensure you also do not pass a value prop to the native file input and keep other
props (name, ref) intact so RHF correctly registers the files.
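The file-case normalization described in this comment boils down to a small pure function. The sketch below uses illustrative names (not code from the PR) and is generic so it can be exercised without real File objects:

```typescript
// Illustrative sketch: turn a FileList-like value from a file input into the
// shape react-hook-form should store: a single item (or null) for single
// inputs, an array for `multiple` inputs.
function normalizeFileValue<T>(
  files: ArrayLike<T> | null,
  multiple: boolean
): T | T[] | null {
  const list = Array.from(files ?? []);
  if (multiple) return list;
  return list.length > 0 ? list[0] : null;
}
```

Wiring this into the onChange handler lets RHF store the File/File[] directly while the native input stays uncontrolled.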
DOCKER_MINIO_API_PORT: z.string().default('9000'),
DOCKER_MINIO_UI_PORT: z.string().default('9001'),
DOCKER_MINIO_USERNAME: z.string(),
DOCKER_MINIO_PASSWORD: z.string(),
S3_ACCESS_KEY_ID: z.string(),
S3_SECRET_ACCESS_KEY: z.string(),
S3_BUCKET_NAME: z.string().default('default'),
S3_REGION: z.string().default('default'),
S3_BUCKET_PUBLIC_URL: z.string(),
S3_ENDPOINT: z.string(),
Env validation breaks CI; make S3/MinIO vars optional in non‑prod and validate as URLs.
E2E fails due to required S3/MinIO vars during db:seed (not needed there). Provide non‑prod defaults and use .url()
for URL-ish vars.
- DOCKER_MINIO_API_PORT: z.string().default('9000'),
- DOCKER_MINIO_UI_PORT: z.string().default('9001'),
- DOCKER_MINIO_USERNAME: z.string(),
- DOCKER_MINIO_PASSWORD: z.string(),
- S3_ACCESS_KEY_ID: z.string(),
- S3_SECRET_ACCESS_KEY: z.string(),
- S3_BUCKET_NAME: z.string().default('default'),
- S3_REGION: z.string().default('default'),
- S3_BUCKET_PUBLIC_URL: z.string(),
- S3_ENDPOINT: z.string(),
+ DOCKER_MINIO_API_PORT: z.string().default('9000'),
+ DOCKER_MINIO_UI_PORT: z.string().default('9001'),
+ DOCKER_MINIO_USERNAME: isProd ? z.string() : z.string().default('startui'),
+ DOCKER_MINIO_PASSWORD: isProd ? z.string() : z.string().default('startui'),
+ S3_ACCESS_KEY_ID: isProd ? z.string() : z.string().default('startui-access-key'),
+ S3_SECRET_ACCESS_KEY: isProd ? z.string() : z.string().default('startui-secret-key'),
+ S3_BUCKET_NAME: z.string().default('start-ui-bucket'),
+ S3_REGION: z.string().default('us-east-1'),
+ S3_BUCKET_PUBLIC_URL: (isProd ? z.string().url() : z.string().url().default('http://127.0.0.1:9000/start-ui-bucket')),
+ S3_ENDPOINT: (isProd ? z.string().url() : z.string().url().default('http://127.0.0.1:9000')),
If you prefer stricter gating, require these only when FILE_UPLOADS_ENABLED=true
. I can draft that follow-up.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
DOCKER_MINIO_API_PORT: z.string().default('9000'),
DOCKER_MINIO_UI_PORT: z.string().default('9001'),
DOCKER_MINIO_USERNAME: isProd ? z.string() : z.string().default('startui'),
DOCKER_MINIO_PASSWORD: isProd ? z.string() : z.string().default('startui'),
S3_ACCESS_KEY_ID: isProd ? z.string() : z.string().default('startui-access-key'),
S3_SECRET_ACCESS_KEY: isProd ? z.string() : z.string().default('startui-secret-key'),
S3_BUCKET_NAME: z.string().default('start-ui-bucket'),
S3_REGION: z.string().default('us-east-1'),
S3_BUCKET_PUBLIC_URL: (isProd ? z.string().url() : z.string().url().default('http://127.0.0.1:9000/start-ui-bucket')),
S3_ENDPOINT: (isProd ? z.string().url() : z.string().url().default('http://127.0.0.1:9000')),
🤖 Prompt for AI Agents
In app/env/server.ts around lines 30 to 39, the current zod schema makes
S3/MinIO env vars required which breaks CI; update the schema to make S3/MinIO
variables optional or provide sensible non‑prod defaults (e.g., default empty
string or default ports) so db:seed and other non‑upload workflows don't fail,
and change URL-like fields (S3_BUCKET_PUBLIC_URL, S3_ENDPOINT) to use
z.string().url() (or z.string().optional().url()) to validate they are valid
URLs; alternatively (preferred for stricter control) only require these vars
when FILE_UPLOADS_ENABLED=true by gating their .required() validation behind
that flag.
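The gating rule sketched in this suggestion can be expressed without zod as a tiny resolver: required in production, defaulted in development. Everything below (function name, default values) is an illustrative assumption, not the PR's code:

```typescript
// Illustrative sketch of "require in prod, default in dev" env resolution.
// Default values below are placeholders, not the project's real settings.
type Env = Record<string, string | undefined>;

function resolveS3Env(env: Env, isProd: boolean) {
  const required = (key: string, devDefault: string): string => {
    const value = env[key];
    if (value) return value;
    if (isProd) throw new Error(`Missing required environment variable: ${key}`);
    return devDefault; // dev/test fallback so db:seed and friends keep working
  };
  return {
    S3_ACCESS_KEY_ID: required('S3_ACCESS_KEY_ID', 'dev-access-key'),
    S3_SECRET_ACCESS_KEY: required('S3_SECRET_ACCESS_KEY', 'dev-secret-key'),
    S3_ENDPOINT: required('S3_ENDPOINT', 'http://127.0.0.1:9000'),
    S3_BUCKET_PUBLIC_URL: required('S3_BUCKET_PUBLIC_URL', 'http://127.0.0.1:9000/default'),
  };
}
```

In zod terms this corresponds to swapping z.string() for a defaulted schema outside production, which is exactly what the committable suggestion above does.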
import {
  type FormFieldsAccountUpdateProfilePicture,
  zFormFieldsAccountUpdateProfilePicture,
} from '@/features/account/schema';
import { authClient } from '@/features/auth/client';
🛠️ Refactor suggestion
Schema/field type mismatch: profilePicture is a file input but validated as string.
You never set the field value, so zod min(1) will fail and block submit. Either bind the selected file name to the field or, better, validate the presence of an uploadedFile before submit and remove the string constraint.
Minimal in-component guard (keeps schema unchanged but avoids undefined payload):
- onSubmit={async ({ profilePicture }) => {
- console.log('form', { profilePicture, uploadedFile });
- updateUser.mutate({ profilePictureId: uploadedFile?.objectKey });
- }}
+ onSubmit={async () => {
+ if (!uploadedFile?.objectKey) {
+ form.setError('profilePicture', {
+ message: t('account:changeProfilePictureDrawer.missingFile') || 'Please select a picture',
+ });
+ return;
+ }
+ updateUser.mutate({ profilePictureId: uploadedFile.objectKey });
+ }}
Preferable follow-up: update zFormFieldsAccountUpdateProfilePicture() to accept a File/FileList (or make it optional) and rely on the uploadedFile presence instead of a string length check. Do you want a patch for the schema too?
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
import {
  type FormFieldsAccountUpdateProfilePicture,
  zFormFieldsAccountUpdateProfilePicture,
} from '@/features/account/schema';
import { authClient } from '@/features/auth/client';
onSubmit={async () => {
  if (!uploadedFile?.objectKey) {
    form.setError('profilePicture', {
      message: t('account:changeProfilePictureDrawer.missingFile') || 'Please select a picture',
    });
    return;
  }
  updateUser.mutate({ profilePictureId: uploadedFile.objectKey });
}}
minio:
  profiles: [dev]
  image: minio/minio:RELEASE.2025-07-23T15-54-02Z-cpuv1
  ports:
    - '${DOCKER_MINIO_API_PORT}:9000'
    - '${DOCKER_MINIO_UI_PORT}:9001'
  environment:
    - MINIO_ROOT_USER=${DOCKER_MINIO_USERNAME}
    - MINIO_ROOT_PASSWORD=${DOCKER_MINIO_PASSWORD}
  volumes:
    - minio:/data/minio
  command: minio server /data/minio --console-address :${DOCKER_MINIO_UI_PORT}
  healthcheck:
    test: ['CMD', 'mc', 'ready', 'local']
    interval: 5s
    timeout: 5s
    retries: 5
🛠️ Refactor suggestion
MinIO healthcheck uses mc which isn’t in the minio image.
The official minio/minio image doesn’t ship mc. Use the built-in HTTP health endpoints instead.
minio:
profiles: [dev]
image: minio/minio:RELEASE.2025-07-23T15-54-02Z-cpuv1
@@
- healthcheck:
- test: ['CMD', 'mc', 'ready', 'local']
- interval: 5s
- timeout: 5s
- retries: 5
+ healthcheck:
+ test: ['CMD-SHELL', 'curl -fsS http://localhost:9000/minio/health/ready || exit 1']
+ interval: 5s
+ timeout: 5s
+ retries: 5
If curl isn’t available in this image, switch to a tiny sidecar for health or install curl, or alternatively remove the healthcheck and add a readiness loop in createbucket. Let me know if you prefer a sidecar example.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
docker-compose.yml lines 19-36: the healthcheck currently runs "mc" which is not
present in the official minio/minio image; replace it with an HTTP readiness
probe against MinIO's built-in endpoint (e.g. GET
http://localhost:9000/minio/health/ready) using curl/wget in a CMD or CMD-SHELL
healthcheck, keeping interval/timeout/retries as appropriate; if the image lacks
curl/wget, either add a tiny sidecar container that performs the HTTP probe,
install a minimal probe tool in the image, or remove the service-level
healthcheck and implement a readiness loop when creating buckets—choose one of
those options and update the healthcheck section accordingly.
createbucket:
  image: minio/mc
  depends_on:
    minio:
      condition: service_healthy
  entrypoint: []
  command: ["sh", "-c", "
    mc alias set default http://minio:${DOCKER_MINIO_API_PORT} \"${DOCKER_MINIO_USERNAME}\" \"${DOCKER_MINIO_PASSWORD}\";
    mc admin user add default ${S3_ACCESS_KEY_ID} ${S3_SECRET_ACCESS_KEY};
    mc admin policy attach default readwrite --user ${S3_ACCESS_KEY_ID};
    mc mb --ignore-existing default/${S3_BUCKET_NAME} 2>/dev/null;
    mc anonymous set download default/${S3_BUCKET_NAME};
    echo 'Bucket configuration completed successfully';
  "]
  restart: "no"
🛠️ Refactor suggestion
Target the container port from createbucket; and gate it behind the dev profile.
Inside the Compose network, use minio:9000, not the host port env. Also mark createbucket as dev-only.
createbucket:
image: minio/mc
+ profiles: [dev]
@@
- command: ["sh", "-c", "
- mc alias set default http://minio:${DOCKER_MINIO_API_PORT} \"${DOCKER_MINIO_USERNAME}\" \"${DOCKER_MINIO_PASSWORD}\";
+ command: ["sh", "-c", "
+ mc alias set default http://minio:9000 \"${DOCKER_MINIO_USERNAME}\" \"${DOCKER_MINIO_PASSWORD}\";
mc admin user add default ${S3_ACCESS_KEY_ID} ${S3_SECRET_ACCESS_KEY};
mc admin policy attach default readwrite --user ${S3_ACCESS_KEY_ID};
mc mb --ignore-existing default/${S3_BUCKET_NAME} 2>/dev/null;
mc anonymous set download default/${S3_BUCKET_NAME};
echo 'Bucket configuration completed successfully';
"
]
🤖 Prompt for AI Agents
In docker-compose.yml around lines 37 to 53, the createbucket service is using
the host env port and runs in all profiles; change the MinIO endpoint to use the
internal container address and port (http://minio:9000) instead of
http://minio:${DOCKER_MINIO_API_PORT}, and mark the createbucket service as
dev-only by adding the appropriate profiles key (e.g., profiles: ["dev"]) so it
only runs in the dev profile; keep the existing depends_on health gating and
leave the rest of the entrypoint/commands unchanged.
- demo mode block upload
Actionable comments posted: 1
🧹 Nitpick comments (1)
package.json (1)
34-34: Make `dk:init` more reliable by waiting for services to be healthy

Compose supports `--wait` to block until healthchecks pass. This avoids races when MinIO is needed immediately after init.

- "dk:init": "docker compose --profile init up -d",
+ "dk:init": "docker compose --profile init up -d --wait",
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- docker-compose.yml (1 hunks)
- package.json (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- docker-compose.yml
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: yoannfleurydev
PR: BearStudio/start-ui-web#443
File: src/lib/s3/config.ts:13-13
Timestamp: 2025-02-21T08:38:40.338Z
Learning: In the avatar upload feature, the S3 key is intentionally set to `avatars/${user.id}` to ensure that new avatar uploads overwrite the previous one, preventing storage bloat and simplifying avatar management.
🔇 Additional comments (1)
package.json (1)
44-44: No action required — @aws-sdk/client-s3 is only imported in server code. Import found at app/server/routers/account.tsx:1; no matches under app/components, app/features, or app/routes.
package.json
Outdated
"@tanstack/zod-adapter": "1.121.34",
"@uidotdev/usehooks": "2.4.1",
"better-auth": "1.2.7",
"better-upload": "1.1.0",
💡 Verification agent
🧩 Analysis chain
Harden file uploads (limits, type checks, storage keys) with better-upload
With uploads enabled, enforce server-side protections and consistent keys.
- Enforce max file size, file count, and accepted MIME types using magic-bytes detection (not just extensions).
- Reject MIME/extension mismatches and strip metadata; consider basic image transforms for avatars.
- Rate-limit and set per-user quotas; add timeouts and backpressure.
- Store avatars under a stable key to overwrite prior uploads (per your retrieved learning: use `avatars/${user.id}`); confirm this is what your uploader uses.
- Log upload attempts and surface errors via i18n.
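The magic-bytes check from the first bullet can be sketched as follows (a hypothetical helper, not the better-upload API; only a few common image signatures are shown):

```typescript
// Compare the file's leading bytes against known image signatures and
// reject declarations that don't match what the bytes actually are.
const SIGNATURES: Array<{ mime: string; bytes: number[] }> = [
  { mime: 'image/png', bytes: [0x89, 0x50, 0x4e, 0x47] },
  { mime: 'image/jpeg', bytes: [0xff, 0xd8, 0xff] },
  { mime: 'image/gif', bytes: [0x47, 0x49, 0x46, 0x38] },
];

function sniffImageMime(head: Uint8Array): string | null {
  for (const { mime, bytes } of SIGNATURES) {
    // Every signature byte must match the corresponding leading byte.
    if (bytes.every((b, i) => head[i] === b)) return mime;
  }
  return null;
}

function isMimeConsistent(declared: string, head: Uint8Array): boolean {
  const sniffed = sniffImageMime(head);
  return sniffed !== null && sniffed === declared;
}
```

In practice you would read only the first few bytes of the upload stream, run this check server-side in the upload hook, and reject on mismatch before writing anything to S3.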
Quick checks:
If missing, I can draft the upload handler with strict limits and S3 keying.
🏁 Script executed:
#!/bin/bash
# Look for upload config knobs; ensure they exist and are non-trivial.
rg -nP -C2 '\b(max(File)?Size|maxBytes|fileSize|maxFiles|allowed(Mime|Types)|content[-_ ]?type|magic|sniff)\b' app || true
# Ensure avatar S3 keys follow the overwrite pattern.
rg -nP -C2 'avatars/\$\{?user\.id\}?|avatars/.+user\.id' app || true
Length of output: 337
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== show upload.ts ==="
nl -ba app/routes/api/upload.ts || true
echo
echo "=== search for explicit avatars/${user.id} key patterns ==="
rg -nP -C2 'avatars/\$\{?user\.id\}?|avatars/.+user\.id' -S || true
echo
echo "=== search for general 'avatar' occurrences ==="
rg -n -C2 '\bavatar\b' || true
echo
echo "=== search for S3 / storage usage ==="
rg -nP -C2 '\b(S3Client|S3|s3|putObject|PutObjectCommand|uploadFile|upload_to_s3|uploadToS3|createPresignedPost|presign|presigned|bucket|put_object)\b' -S || true
echo
echo "=== search for file-type / magic-bytes detection libs ==="
rg -nP -C2 '\b(file-type|filetype|magic|magic-bytes|detect-file-type|file-type-from-buffer|fromBuffer)\b' -S || true
echo
echo "=== search for image transforms / metadata stripping ==="
rg -nP -C2 'sharp\(|jimp|imagemagick|gm\(|exif|stripMetadata|strip_meta|removeMetadata' -S || true
echo
echo "=== search for rate-limit / quota / timeout hints ==="
rg -nP -C2 'rate[-_ ]?limit|quota|per[-_ ]?user|throttle|backpressure|timeout' -S || true
Length of output: 161
🏁 Script executed:
#!/bin/bash
# safer re-check of upload handler and related patterns (avoid nl, avoid unescaped ${})
set -e
file="app/routes/api/upload.ts"
if [ -f "$file" ]; then
printf "=== show %s ===\n" "$file"
awk '{printf("%5d %s\n", NR, $0)}' "$file"
else
printf "NOT FOUND: %s\n" "$file"
fi
printf "\n=== search for explicit avatars/\${user.id} key patterns ===\n"
rg -nP -C2 'avatars/\$\{?user\.id\}?|avatars/.+user\.id' -S || true
printf "\n=== search for general 'avatar' occurrences ===\n"
rg -n -C2 '\bavatar\b' -S || true
printf "\n=== search for S3 / storage usage ===\n"
rg -nP -C2 '\b(S3Client|S3|s3|putObject|PutObjectCommand|uploadFile|upload_to_s3|uploadToS3|createPresignedPost|presign|presigned|bucket|put_object)\b' -S || true
printf "\n=== search for file-type / magic-bytes detection libs ===\n"
rg -nP -C2 '\b(file-type|filetype|magic|magic-bytes|detect-file-type|file-type-from-buffer|fromBuffer)\b' -S || true
printf "\n=== search for image transforms / metadata stripping ===\n"
rg -nP -C2 'sharp\(|jimp|imagemagick|gm\(|exif|stripMetadata|strip_meta|removeMetadata' -S || true
printf "\n=== search for rate-limit / quota / timeout hints ===\n"
rg -nP -C2 'rate[-_ ]?limit|quota|per[-_ ]?user|throttle|backpressure|timeout' -S || true
Length of output: 39468
Harden avatar uploads: add content‑sniffing, stable S3 keying, auth, quotas & metadata stripping
Found: app/routes/api/upload.ts — avatar route sets fileTypes: ['image/*'] and maxFileSize: 4MB, but onBeforeUpload is TODO and there’s no explicit avatar S3 key/config.
Actionable fixes:
- Add server-side magic-bytes/content-sniffing (reject MIME/extension mismatches).
- Strip EXIF/metadata and apply basic image transforms (resize/normalize) for avatars.
- Implement auth/authorization in onBeforeUpload and set the object key to avatars/${user.id} so uploads overwrite previous avatars (coordinate with app/server/routers/account.tsx where DeleteObjectCommand is used).
- Add per-user rate-limits/quotas and request timeouts/backpressure.
- Log upload attempts and surface i18n error keys (account locale files already contain avatar messages).
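The stable-key and "delete only when the key changed" points can be sketched like this (hypothetical helper names, not code from the PR):

```typescript
// Deliberately extension-free and deterministic: re-uploading always
// targets the same key, so the new object overwrites the old one and
// no orphan avatar files accumulate.
function avatarObjectKey(userId: string): string {
  return `avatars/${userId}`;
}

// Only issue a DeleteObject when there was a previous key and it differs
// from the new one; never send a delete with an undefined key.
function shouldDeleteOldObject(
  oldKey: string | null | undefined,
  newKey: string
): boolean {
  return Boolean(oldKey) && oldKey !== newKey;
}
```

With the stable-key scheme, `shouldDeleteOldObject` returns false on every re-upload, so the delete path only fires if the keying scheme ever changes.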
🤖 Prompt for AI Agents
package.json around line 66: The avatar upload endpoint needs
hardening—implement server-side content-sniffing to validate magic bytes vs
declared MIME/extension and reject mismatches; in onBeforeUpload enforce
auth/authorization, per-user rate-limits/quotas and request
timeouts/backpressure, and map errors to existing i18n keys; process uploaded
images to strip EXIF/metadata and apply basic transforms (resize/normalize)
before storage; and use a stable S3 object key pattern avatars/${user.id} so
uploads overwrite previous avatars (coordinate with account deletion code), plus
add structured logging for all upload attempts including outcomes and reasons.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/server/routers/account.tsx (1)
85-91: Don't blank name when updating only avatar; update name conditionally

Currently, if you remove the early `return`, the unconditional update would set `name` to `''` when not provided. Make it conditional:

- await context.db.user.update({
-   where: { id: context.user.id },
-   data: {
-     name: input.name ?? '',
-   },
- });
+ if (input.name !== undefined) {
+   await context.db.user.update({
+     where: { id: context.user.id },
+     data: { name: input.name ?? '' },
+   });
+ }
♻️ Duplicate comments (2)
app/features/account/user-card.tsx (1)
20-22: Fix AvatarImage src; avoid "undefined" URL and normalize join

Construct the URL only when both base and id exist; otherwise pass `undefined` so no request is made. Also join via `new URL` to avoid double slashes.

- const getImageUrl = () => {
-   return `${envClient.VITE_S3_BUCKET_PUBLIC_URL}/${session.data?.user.avatarFileId}`;
- };
+ const avatarSrc =
+   session.data?.user.avatarFileId && envClient.VITE_S3_BUCKET_PUBLIC_URL
+     ? new URL(
+         session.data.user.avatarFileId,
+         envClient.VITE_S3_BUCKET_PUBLIC_URL
+       ).toString()
+     : undefined;

- {session.data?.user.avatarFileId ? (
-   <AvatarImage
-     src={getImageUrl()}
-     alt={session.data?.user.name}
-   />
- ) : (
+ {session.data?.user.avatarFileId ? (
+   <AvatarImage
+     src={avatarSrc}
+     alt={session.data?.user.name ?? 'Avatar'}
+   />
+ ) : (

Also applies to: 31-35
app/server/routers/account.tsx (1)
53-67: Guard S3 delete and fix log field names; delete only when key changed

Avoid sending DeleteObject with an undefined key and skip deletion when the new key matches the old one.

- // If profilePictureId is defined, check to delete the old one
- if (input.avatarFileId) {
-   // Remove old file if there was one
-   const deleteCommand = new DeleteObjectCommand({
-     Bucket: envServer.S3_BUCKET_NAME,
-     Key: context.user.avatarFileId,
-   });
-   try {
-     await s3client.send(deleteCommand);
-   } catch (error) {
-     context.logger.warn('Fail to delete user profile picture', {
-       profilePictureId: context.user.avatarFileId,
-       error,
-     });
-   }
+ if (input.avatarFileId) {
+   if (
+     context.user.avatarFileId &&
+     context.user.avatarFileId !== input.avatarFileId
+   ) {
+     const deleteCommand = new DeleteObjectCommand({
+       Bucket: envServer.S3_BUCKET_NAME,
+       Key: context.user.avatarFileId,
+     });
+     try {
+       await s3client.send(deleteCommand);
+     } catch (error) {
+       context.logger.warn('Fail to delete user avatar', {
+         avatarFileId: context.user.avatarFileId,
+         error,
+       });
+     }
+   }
🧹 Nitpick comments (1)
app/server/routers/account.tsx (1)
69-82
: Persist new avatarFileId (ok) — align logs with new naming- } catch (error) { - context.logger.warn('Fail to save user profile picture', { - profilePictureId: context.user.avatarFileId, - error, - }); - } - return; + } catch (error) { + context.logger.warn('Fail to save user avatar', { + avatarFileId: input.avatarFileId, + error, + }); + }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- app/env/client.ts (1 hunks)
- app/env/server.ts (1 hunks)
- app/features/account/change-profile-picture-drawer.tsx (1 hunks)
- app/features/account/schema.ts (1 hunks)
- app/features/account/user-card.tsx (1 hunks)
- app/features/user/schema.ts (1 hunks)
- app/lib/s3/index.ts (1 hunks)
- app/routes/api/upload.ts (1 hunks)
- app/server/auth.tsx (1 hunks)
- app/server/routers/account.tsx (2 hunks)
- prisma/schema.prisma (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
- app/server/auth.tsx
- app/env/server.ts
- app/features/account/change-profile-picture-drawer.tsx
- prisma/schema.prisma
- app/features/account/schema.ts
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: yoannfleurydev
PR: BearStudio/start-ui-web#443
File: src/lib/s3/config.ts:13-13
Timestamp: 2025-02-21T08:38:40.338Z
Learning: In the avatar upload feature, the S3 key is intentionally set to `avatars/${user.id}` to ensure that new avatar uploads overwrite the previous one, preventing storage bloat and simplifying avatar management.
📚 Learning: 2024-10-14T15:29:53.279Z
Learnt from: ivan-dalmet
PR: BearStudio/start-ui-web#532
File: src/server/config/oauth/providers/discord.ts:11-11
Timestamp: 2024-10-14T15:29:53.279Z
Learning: In `src/server/config/oauth/providers/discord.ts`, when defining the `zDiscordUser` schema, keep fields like `username` as nullable (e.g., `.nullish()`), as the Discord API might return null values despite the documentation stating otherwise.
Applied to files:
app/features/user/schema.ts
📚 Learning: 2025-02-21T08:38:40.338Z
Learnt from: yoannfleurydev
PR: BearStudio/start-ui-web#443
File: src/lib/s3/config.ts:13-13
Timestamp: 2025-02-21T08:38:40.338Z
Learning: In the avatar upload feature, the S3 key is intentionally set to `avatars/${user.id}` to ensure that new avatar uploads overwrite the previous one, preventing storage bloat and simplifying avatar management.
Applied to files:
app/server/routers/account.tsx
app/routes/api/upload.ts
🧬 Code graph analysis (4)
app/lib/s3/index.ts (1)
- app/env/server.ts (1): envServer (9-43)
app/features/account/user-card.tsx (2)
- app/env/client.ts (1): envClient (24-54)
- app/features/account/change-profile-picture-drawer.tsx (1): ChangeProfilePictureDrawer (37-160)
app/server/routers/account.tsx (2)
- app/env/server.ts (1): envServer (9-43)
- app/lib/s3/index.ts (1): s3client (5-12)
app/routes/api/upload.ts (3)
- app/server/router.ts (1): router (11-16)
- app/lib/s3/index.ts (1): s3client (5-12)
- app/env/server.ts (1): envServer (9-43)
🪛 GitHub Actions: 🔎 Code Quality
app/env/client.ts
[error] 24-24: Invalid environment variables: 'VITE_S3_BUCKET_PUBLIC_URL' is required but undefined.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Playwright E2E Tests
🔇 Additional comments (2)
app/features/user/schema.ts (1)
27-27: LGTM: schema aligns with new avatar storage model. The switch to `avatarFileId` is consistent with the backend and UI changes.

app/routes/api/upload.ts (1)
12-21: Good: explicit image allowlist (SVG excluded)
@@ -43,6 +43,7 @@ export const envClient = createEnv({
       .string()
       .optional()
       .transform((value) => value ?? (isDev ? 'gold' : 'plum')),
+    VITE_S3_BUCKET_PUBLIC_URL: z.string(),
💡 Verification agent
🧩 Analysis chain
Build is red: make URL typed and ensure it's provided in CI

`VITE_S3_BUCKET_PUBLIC_URL` is required but undefined in the pipeline. Also validate it as a URL.
Apply:
- VITE_S3_BUCKET_PUBLIC_URL: z.string(),
+ VITE_S3_BUCKET_PUBLIC_URL: z.string().url(),
Follow‑ups:
- Add this var to .env.example, local envs, and CI (or set SKIP_ENV_VALIDATION only where acceptable).
- It must be an absolute base URL (e.g., https://cdn.example.com/bucket/).
Run:
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Usages:"
rg -n --heading 'VITE_S3_BUCKET_PUBLIC_URL' app
echo -e "\nCheck .env example(s):"
fd -H '^\.env(\..+)?$' | while read -r f; do
echo "==> $f"
rg -n 'VITE_S3_BUCKET_PUBLIC_URL' "$f" || true
done
Length of output: 239
Require & validate VITE_S3_BUCKET_PUBLIC_URL and add it to envs/CI
- Change validation in app/env/client.ts:
- VITE_S3_BUCKET_PUBLIC_URL: z.string(),
+ VITE_S3_BUCKET_PUBLIC_URL: z.string().url(),
- Add VITE_S3_BUCKET_PUBLIC_URL to .env.example and the pipeline/CI secrets with an absolute base URL.
- Usage note: app/features/account/user-card.tsx:21 builds `${envClient.VITE_S3_BUCKET_PUBLIC_URL}/${session.data?.user.avatarFileId}` — normalize the base (no trailing slash) or update the join logic to avoid double slashes.
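One way to implement the suggested join (a sketch; `publicObjectUrl` is a hypothetical name, not code from the PR):

```typescript
// Join a public bucket base URL and an object key without producing
// double slashes. Return undefined when either part is missing so the
// caller can skip rendering the image instead of requesting ".../undefined".
function publicObjectUrl(
  base: string | undefined,
  key: string | null | undefined
): string | undefined {
  if (!base || !key) return undefined;
  return `${base.replace(/\/+$/, '')}/${key.replace(/^\/+/, '')}`;
}
```

This keeps the env var tolerant of a trailing slash instead of requiring every deployment to normalize it.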
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- VITE_S3_BUCKET_PUBLIC_URL: z.string(),
+ VITE_S3_BUCKET_PUBLIC_URL: z.string().url(),
🤖 Prompt for AI Agents
In app/env/client.ts around line 46 add validation for VITE_S3_BUCKET_PUBLIC_URL
(z.string().url()) so the client env includes the S3 public base URL; then update
.env.example and your CI/pipeline secrets to provide an absolute base URL;
finally ensure the value is normalized (strip any trailing slash) or update the
usage at app/features/account/user-card.tsx (where it builds
`${envClient.VITE_S3_BUCKET_PUBLIC_URL}/${session.data?.user.avatarFileId}`) to
safely join parts and avoid double slashes.
export const s3client = new S3Client({
  endpoint: envServer.S3_ENDPOINT,
  credentials: {
    accessKeyId: envServer.S3_ACCESS_KEY_ID,
    secretAccessKey: envServer.S3_SECRET_ACCESS_KEY,
  },
  region: envServer.S3_REGION,
});
🛠️ Refactor suggestion
Enable path-style access for MinIO and improve robustness
MinIO typically requires path-style addressing. Without it, requests may fail locally.
Apply:
export const s3client = new S3Client({
endpoint: envServer.S3_ENDPOINT,
+ forcePathStyle: true,
credentials: {
accessKeyId: envServer.S3_ACCESS_KEY_ID,
secretAccessKey: envServer.S3_SECRET_ACCESS_KEY,
},
region: envServer.S3_REGION,
+ maxAttempts: 3,
});
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
export const s3client = new S3Client({
  endpoint: envServer.S3_ENDPOINT,
  forcePathStyle: true,
  credentials: {
    accessKeyId: envServer.S3_ACCESS_KEY_ID,
    secretAccessKey: envServer.S3_SECRET_ACCESS_KEY,
  },
  region: envServer.S3_REGION,
  maxAttempts: 3,
});
🤖 Prompt for AI Agents
In app/lib/s3/index.ts around lines 5 to 12, the S3Client is missing path-style
addressing which MinIO requires; update the S3Client config to include
forcePathStyle: true so requests use path-style addressing for MinIO, and keep
the existing endpoint, credentials and region values intact to maintain current
behavior.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.env.example (1)
27-29: DATABASE_URL references undefined DOCKER_DATABASE_* vars in this template.

Either reintroduce the vars or inline a self-contained local DSN to prevent confusion.

-# DATABASE
-DATABASE_URL="postgres://${DOCKER_DATABASE_USERNAME}:${DOCKER_DATABASE_PASSWORD}@localhost:${DOCKER_DATABASE_PORT}/${DOCKER_DATABASE_NAME}"
+# DATABASE
+# Local example DSN (adjust as needed)
+DATABASE_URL=postgres://postgres:postgres@localhost:5432/startui

Alternatively, add:

DOCKER_DATABASE_PORT=5432
DOCKER_DATABASE_NAME=startui
DOCKER_DATABASE_USERNAME=postgres
DOCKER_DATABASE_PASSWORD=postgres
♻️ Duplicate comments (2)
.env.example (2)
17-17: Public URL now targets API port — looks good. Ensure access policy.

LGTM and resolves the earlier "console port" issue. If the bucket isn't public, fetching `${VITE_S3_BUCKET_PUBLIC_URL}/${key}` will 403; either set a public-read bucket policy for dev or use signed URLs.

9-14: Fix undefined var in S3_ENDPOINT; align ordering and quotes.

DOCKER_MINIO_FORWARD_PORT isn't defined here; use DOCKER_MINIO_API_PORT. Also unquote and reorder per linter.

-# S3
-S3_ENDPOINT="http://127.0.0.1:${DOCKER_MINIO_FORWARD_PORT}"
-S3_BUCKET_NAME="start-ui-bucket"
-S3_ACCESS_KEY_ID="startui-access-key"
-S3_SECRET_ACCESS_KEY="startui-secret-key"
-S3_REGION="default"
+# S3
+S3_ACCESS_KEY_ID=startui-access-key
+S3_BUCKET_NAME=start-ui-bucket
+S3_ENDPOINT=http://127.0.0.1:${DOCKER_MINIO_API_PORT}
+S3_REGION=default
+S3_SECRET_ACCESS_KEY=startui-secret-key
+
+# If you expose MinIO via a forwarded port/reverse-proxy, document it separately to avoid collisions, e.g.:
+# FORWARDED_S3_ENDPOINT=http://127.0.0.1:${DOCKER_MINIO_FORWARD_PORT}
🧹 Nitpick comments (2)
.env.example (2)
3-7: Unquote values and fix key order (dotenv-linter).

Remove quotes for numbers/identifiers and order keys so PASSWORD comes before UI_PORT to satisfy lint hints.

 # OBJECT STORAGE
-DOCKER_MINIO_API_PORT="9000"
-DOCKER_MINIO_UI_PORT="9001"
-DOCKER_MINIO_USERNAME="startui"
-DOCKER_MINIO_PASSWORD="startui"
+DOCKER_MINIO_API_PORT=9000
+DOCKER_MINIO_USERNAME=startui
+DOCKER_MINIO_PASSWORD=startui
+DOCKER_MINIO_UI_PORT=9001
31-31: Fix grammar in comment.

Change "to a generated secret" → "to generate a secret".

-BETTER_AUTH_SECRET="REPLACE ME" # You can use `npx @better-auth/cli@latest secret` to a generated secret
+BETTER_AUTH_SECRET="REPLACE ME" # You can use `npx @better-auth/cli@latest secret` to generate a secret
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.env.example
(2 hunks)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: yoannfleurydev
PR: BearStudio/start-ui-web#443
File: src/lib/s3/config.ts:13-13
Timestamp: 2025-02-21T08:38:40.338Z
Learning: In the avatar upload feature, the S3 key is intentionally set to `avatars/${user.id}` to ensure that new avatar uploads overwrite the previous one, preventing storage bloat and simplifying avatar management.
🪛 dotenv-linter (3.3.0)
.env.example
[warning] 4-4: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 5-5: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 6-6: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 7-7: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 7-7: [UnorderedKey] The DOCKER_MINIO_PASSWORD key should go before the DOCKER_MINIO_UI_PORT key
(UnorderedKey)
[warning] 11-11: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 11-11: [UnorderedKey] The S3_BUCKET_NAME key should go before the S3_ENDPOINT key
(UnorderedKey)
[warning] 12-12: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 12-12: [UnorderedKey] The S3_ACCESS_KEY_ID key should go before the S3_BUCKET_NAME key
(UnorderedKey)
[warning] 13-13: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 14-14: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 14-14: [UnorderedKey] The S3_REGION key should go before the S3_SECRET_ACCESS_KEY key
(UnorderedKey)
Force-pushed from a750dda to c49d5bf (Compare)
Actionable comments posted: 4
♻️ Duplicate comments (4)
src/components/form/field-text/index.tsx (1)
47-76
: File input is incorrectly controlled.

For `type="file"`, spreading `{...field}` passes a `value` prop to the native file input, which triggers React warnings and prevents React Hook Form from receiving the `File`/`File[]` objects. File inputs must be uncontrolled.

Apply this diff to handle file inputs specially:
 <Controller
   name={name}
   control={control}
   disabled={disabled}
-  defaultValue={defaultValue}
+  defaultValue={type === 'file' ? undefined : defaultValue}
   shouldUnregister={shouldUnregister}
   render={({ field, fieldState }) => (
     <div
       {...containerProps}
       className={cn(
         'flex flex-1 flex-col gap-1',
         containerProps?.className
       )}
     >
       <Input
         type={type}
         id={ctx.id}
         aria-invalid={fieldState.error ? true : undefined}
         aria-describedby={
           !fieldState.error
             ? `${ctx.descriptionId}`
             : `${ctx.descriptionId} ${ctx.errorId}`
         }
         {...rest}
-        {...field}
-        onChange={(e) => {
-          field.onChange(e);
-          rest.onChange?.(e);
-        }}
-        onBlur={(e) => {
-          field.onBlur();
-          rest.onBlur?.(e);
-        }}
+        name={field.name}
+        ref={field.ref}
+        {...(type === 'file'
+          ? {
+              onChange: (e: React.ChangeEvent<HTMLInputElement>) => {
+                const files = Array.from(e.target.files ?? []);
+                field.onChange(rest.multiple ? files : files[0] ?? null);
+                rest.onChange?.(e);
+              },
+              onBlur: (e: React.FocusEvent<HTMLInputElement>) => {
+                field.onBlur();
+                rest.onBlur?.(e);
+              },
+            }
+          : {
+              value: field.value,
+              onChange: (e: React.ChangeEvent<HTMLInputElement>) => {
+                field.onChange(e);
+                rest.onChange?.(e);
+              },
+              onBlur: (e: React.FocusEvent<HTMLInputElement>) => {
+                field.onBlur();
+                rest.onBlur?.(e);
+              },
+            })}
       />
       <FormFieldError />
     </div>
   )}
 />

docker-compose.yml (2)
18-34
: Healthcheck references unavailable `mc` command.

The healthcheck runs `mc ready local`, but the official `minio/minio` image doesn't include the `mc` CLI tool.

Use MinIO's built-in HTTP health endpoint instead:

 healthcheck:
-  test: ['CMD', 'mc', 'ready', 'local']
+  test: ['CMD-SHELL', 'curl -f http://localhost:9000/minio/health/ready || exit 1']
   interval: 5s
   timeout: 5s
   retries: 5
35-52
: Use internal container port in alias configuration.

The `createbucket` service references `${DOCKER_MINIO_API_PORT}` when configuring the MinIO alias. Within the Docker Compose network, the service should connect to the container's internal port (9000) rather than the host-mapped port.

Apply this diff:

 command: ["sh", "-c", "
-  mc alias set default http://minio:${DOCKER_MINIO_API_PORT} \"${DOCKER_MINIO_USERNAME}\" \"${DOCKER_MINIO_PASSWORD}\";
+  mc alias set default http://minio:9000 \"${DOCKER_MINIO_USERNAME}\" \"${DOCKER_MINIO_PASSWORD}\";
   mc admin user add default ${S3_ACCESS_KEY_ID} ${S3_SECRET_ACCESS_KEY};
   mc admin policy attach default readwrite --user ${S3_ACCESS_KEY_ID};
   mc mb --ignore-existing default/${S3_BUCKET_NAME} 2>/dev/null;
   mc anonymous set download default/${S3_BUCKET_NAME};
   echo 'Bucket configuration completed successfully';
   "
 ]

src/env/client.ts (1)
46-46
: Pipeline failure: VITE_S3_BUCKET_PUBLIC_URL is undefined and lacks URL validation.

The test environment fails because `VITE_S3_BUCKET_PUBLIC_URL` is required but not provided. Additionally, the schema should validate it as a URL. Apply this diff to add URL validation:

- VITE_S3_BUCKET_PUBLIC_URL: z.string(),
+ VITE_S3_BUCKET_PUBLIC_URL: z.string().url(),
VITE_S3_BUCKET_PUBLIC_URL
to:
.env.example
with a sample value (e.g.,http://localhost:9000/default
)- Your CI environment variables with the appropriate public base URL
Note: Ensure the URL does not have a trailing slash, or update usages like
${envClient.VITE_S3_BUCKET_PUBLIC_URL}/${avatarFileId}
to handle path joining correctly.
🧹 Nitpick comments (2)
src/features/account/user-card.tsx (1)
20-22
: Add type safety to URL construction.

The `getImageUrl` function doesn't validate that `avatarFileId` exists before constructing the URL, which could produce malformed URLs if called when `avatarFileId` is undefined/null. While the current usage (line 31) checks before calling, the function itself should be defensive.

Apply this diff to add a safety check:

- const getImageUrl = () => {
-   return `${envClient.VITE_S3_BUCKET_PUBLIC_URL}/${session.data?.user.avatarFileId}`;
- };
+ const getImageUrl = (fileId: string) => {
+   return `${envClient.VITE_S3_BUCKET_PUBLIC_URL}/${fileId}`;
+ };

And update the call site:

- <AvatarImage
-   src={getImageUrl()}
-   alt={session.data?.user.name}
- />
+ <AvatarImage
+   src={getImageUrl(session.data.user.avatarFileId)}
+   alt={session.data?.user.name}
+ />

src/lib/s3/index.ts (1)
1-12
: Distinctly name S3 clients to prevent ambiguity.

Both src/lib/s3/index.ts (AWS SDK) and src/lib/object-storage/index.ts (MinIO) export `s3client`. Rename one or both exports (e.g., `awsS3Client`, `minioClient`) and update import sites to clearly differentiate implementations.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml
is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (17)
- .env.example (2 hunks)
- docker-compose.yml (1 hunks)
- package.json (3 hunks)
- prisma/schema.prisma (1 hunks)
- src/components/form/field-text/index.tsx (1 hunks)
- src/components/form/form-field-controller.tsx (1 hunks)
- src/env/client.ts (1 hunks)
- src/env/server.ts (1 hunks)
- src/features/account/change-profile-picture-drawer.tsx (1 hunks)
- src/features/account/schema.ts (1 hunks)
- src/features/account/user-card.tsx (1 hunks)
- src/features/user/schema.ts (1 hunks)
- src/lib/object-storage/index.ts (1 hunks)
- src/lib/s3/index.ts (1 hunks)
- src/locales/ar/account.json (1 hunks)
- src/locales/en/account.json (1 hunks)
- src/locales/fr/account.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- package.json
- prisma/schema.prisma
🧰 Additional context used
🧬 Code graph analysis (6)
src/features/account/user-card.tsx (2)
- src/env/client.ts (1): envClient (24-54)
- src/features/account/change-profile-picture-drawer.tsx (1): ChangeProfilePictureDrawer (37-160)
src/features/account/schema.ts (1)
- src/lib/zod/zod-utils.ts (1): zu (4-92)
src/components/form/form-field-controller.tsx (1)
- src/components/form/field-text/index.tsx (1): FieldText (24-82)
src/lib/s3/index.ts (1)
- src/env/server.ts (1): envServer (9-44)
src/lib/object-storage/index.ts (1)
- src/env/server.ts (1): envServer (9-44)
src/features/account/change-profile-picture-drawer.tsx (2)
- src/features/account/schema.ts (2): FormFieldsAccountUpdateProfilePicture (13-15), zFormFieldsAccountUpdateProfilePicture (16-19)
- src/components/form/form-field-controller.tsx (1): FormFieldController (64-131)
🪛 dotenv-linter (3.3.0)
.env.example
[warning] 9-9: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 10-10: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 11-11: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 12-12: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 12-12: [UnorderedKey] The DOCKER_MINIO_PASSWORD key should go before the DOCKER_MINIO_UI_PORT key
(UnorderedKey)
[warning] 16-16: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 16-16: [UnorderedKey] The S3_BUCKET_NAME key should go before the S3_ENDPOINT key
(UnorderedKey)
[warning] 17-17: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 17-17: [UnorderedKey] The S3_ACCESS_KEY_ID key should go before the S3_BUCKET_NAME key
(UnorderedKey)
[warning] 18-18: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 19-19: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 19-19: [UnorderedKey] The S3_REGION key should go before the S3_SECRET_ACCESS_KEY key
(UnorderedKey)
[warning] 23-23: [UnorderedKey] The VITE_BASE_URL key should go before the VITE_S3_BUCKET_PUBLIC_URL key
(UnorderedKey)
[warning] 37-37: [ValueWithoutQuotes] This value needs to be surrounded in quotes
(ValueWithoutQuotes)
🪛 GitHub Actions: 🔎 Code Quality
src/env/client.ts
[error] 24-25: Invalid environment variables: VITE_S3_BUCKET_PUBLIC_URL is required (environment check failed during test setup).
🪛 GitHub Actions: 🧪 E2E Tests
src/env/server.ts
[error] 9-9: Invalid environment variables: DOCKER_MINIO_USERNAME, DOCKER_MINIO_PASSWORD, S3_ACCESS_KEY_ID, S3_SECRET_ACCESS_KEY, S3_ENDPOINT are missing or invalid. Command 'pnpm db:seed' failed with exit code 1.
🔇 Additional comments (10)
src/components/form/form-field-controller.tsx (1)
108-109: LGTM! The new `file` case correctly routes to `FieldText`, mirroring the existing pattern for the `text` and `tel` types.

src/components/form/field-text/index.tsx (1)
19-19
: Type extended to support file inputs.The union now includes
'file'
, enabling file input rendering through this component.src/features/account/schema.ts (1)
13-19
: LGTM!The new profile picture update schema follows the established pattern and correctly validates
avatarFileId
as a non-empty string.src/locales/en/account.json (1)
24-30
: Translations verified across locales changeProfilePictureDrawer entries exist in en, fr, ar, and sw locale files with matching keys.src/locales/fr/account.json (1)
24-31: LGTM! The French translations for the profile picture drawer are well-formed and consistent with the existing `changeNameDrawer` pattern.
src/locales/ar/account.json (1)
24-31: LGTM! The Arabic translations for the profile picture drawer are well-structured and follow the same pattern as the existing `changeNameDrawer` block.
src/features/account/user-card.tsx (1)
27-52: LGTM! The avatar display logic correctly checks for `avatarFileId` existence before rendering the image, and falls back to `AvatarFallback` when not present. The integration with `ChangeProfilePictureDrawer` provides good UX.
.env.example (1)
9-22: Pipeline failure: Add env vars to CI/CD configuration. The E2E test pipeline is failing because the new MinIO and S3 environment variables are missing from the CI/CD environment. You'll need to add these variables to your GitHub Actions secrets/environment configuration.
Required variables for CI/CD:
DOCKER_MINIO_USERNAME
DOCKER_MINIO_PASSWORD
S3_ACCESS_KEY_ID
S3_SECRET_ACCESS_KEY
S3_ENDPOINT
src/features/account/change-profile-picture-drawer.tsx (1)
99-102: Verify mutation payload validation. The mutation receives `avatarFileId: avatarUpload.uploadedFile?.objectKey`, which could be `undefined` if the upload hasn't completed. While the disabled logic (lines 147-150) should prevent this, it would be safer to add a guard. Consider adding a check before mutation:

onSubmit={async () => {
  if (!avatarUpload.uploadedFile?.objectKey) {
    form.setError('avatarFileId', {
      message: t('account:changeProfilePictureDrawer.errorMessage'),
    });
    return;
  }
  updateUser.mutate({
    avatarFileId: avatarUpload.uploadedFile.objectKey,
  });
}}
31-39: LGTM! The environment variable schema definitions are correct and follow security best practices by requiring explicit values for credentials (no defaults). The default values for ports, bucket name, and region are sensible.
Note: The pipeline failure is expected since these variables need to be added to your CI/CD configuration before the E2E tests can pass.
DOCKER_MINIO_PASSWORD="minioadmin"

# S3
S3_ENDPOINT="http://127.0.0.1:${DOCKER_MINIO_FORWARD_PORT}"
Fix undefined variable reference.
Line 15 references `DOCKER_MINIO_FORWARD_PORT`, which is not defined anywhere in this file. This will cause the S3 endpoint configuration to fail.
Apply this diff to use the correct variable:
-S3_ENDPOINT="http://127.0.0.1:${DOCKER_MINIO_FORWARD_PORT}"
+S3_ENDPOINT="http://127.0.0.1:${DOCKER_MINIO_API_PORT}"
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-S3_ENDPOINT="http://127.0.0.1:${DOCKER_MINIO_FORWARD_PORT}"
+S3_ENDPOINT="http://127.0.0.1:${DOCKER_MINIO_API_PORT}"
🤖 Prompt for AI Agents
In .env.example around line 15, the S3_ENDPOINT references an undefined variable
DOCKER_MINIO_FORWARD_PORT; either change that reference to the actual port
variable used elsewhere in this file (for example DOCKER_MINIO_PORT or
MINIO_PORT) or add a definition for DOCKER_MINIO_FORWARD_PORT above so the
variable is defined; update the line so S3_ENDPOINT uses a defined port
variable.
const form = useForm<FormFieldsAccountUpdateProfilePicture>({
  resolver: zodResolver(zFormFieldsAccountUpdateProfilePicture()),
  values: {
    avatarFileId: '',
  },
});

const avatarUpload = useUploadFile({
  route: 'avatar',
  onError: (error) => {
    form.setError('avatarFileId', {
      message: error.message || 'An error occurred',
    });
  },
});
Form field never updated after upload.
The form field `avatarFileId` is initialized to an empty string (line 46) and validated as a non-empty string by the schema, but the file input's `onChange` handler (lines 124-132) never updates this field value after a successful upload. This means the form will always be invalid, preventing submission even after a successful upload.
Apply this diff to update the field after successful upload:
const avatarUpload = useUploadFile({
route: 'avatar',
+ onSuccess: (file) => {
+ form.setValue('avatarFileId', file.objectKey, {
+ shouldValidate: true,
+ });
+ },
onError: (error) => {
form.setError('avatarFileId', {
message: error.message || 'An error occurred',
});
},
});
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const form = useForm<FormFieldsAccountUpdateProfilePicture>({
  resolver: zodResolver(zFormFieldsAccountUpdateProfilePicture()),
  values: {
    avatarFileId: '',
  },
});
const avatarUpload = useUploadFile({
  route: 'avatar',
  onError: (error) => {
    form.setError('avatarFileId', {
      message: error.message || 'An error occurred',
    });
  },
});

const avatarUpload = useUploadFile({
  route: 'avatar',
  onSuccess: (file) => {
    form.setValue('avatarFileId', file.objectKey, {
      shouldValidate: true,
    });
  },
  onError: (error) => {
    form.setError('avatarFileId', {
      message: error.message || 'An error occurred',
    });
  },
});
🤖 Prompt for AI Agents
In src/features/account/change-profile-picture-drawer.tsx around lines 43 to 57,
the form field avatarFileId is initialized but never updated after a successful
file upload; modify the upload success handler to set the form field value
(e.g., call form.setValue('avatarFileId', <uploadedFileId>) with the ID returned
by the upload) and clear any previous form errors for avatarFileId so the form
becomes valid and submittable after upload.
createdAt: z.date(),
updatedAt: z.date(),
onboardedAt: z.date().nullish(),
avatarFileId: z.string().nullish(),
🧩 Analysis chain
Verify migration and breaking change impact.
The addition of `avatarFileId` (and removal of `image` per the summary) is a breaking change to the public `User` type. Ensure:
- A database migration exists to handle the schema change
- All references to the old `image` field have been updated
Run the following script to verify the migration and check for any remaining `image` field references:
🏁 Script executed:
#!/bin/bash
# Description: Verify Prisma migration exists and check for stale image field usage
echo "==> Check for Prisma migration adding avatarFileId:"
fd -e sql . prisma/migrations --exec grep -l "avatarFileId" {} \;
echo -e "\n==> Check for any remaining references to user.image or user?.image:"
rg -n --type ts '\buser\.image\b|\buser\?\.\bimage\b' -g '!*.test.ts' -g '!*.spec.ts'
echo -e "\n==> Check User type usage in select/include statements:"
rg -n --type ts "image:" -C 2 | grep -A2 -B2 "select\|include"
Length of output: 432
🏁 Script executed:
#!/bin/bash
# Locate Prisma schema file
echo "==> Locate schema.prisma:"
fd schema.prisma
# Search for avatarFileId in Prisma schema
echo -e "\n==> Search for avatarFileId in schema.prisma:"
rg -n "avatarFileId" $(fd schema.prisma)
# Locate any migrations directories
echo -e "\n==> Locate 'migrations' directories:"
fd -t d migrations
# Search for avatarFileId in migration files
echo -e "\n==> Search for avatarFileId in migration files:"
for dir in $(fd -t d migrations); do
rg -n "avatarFileId" "$dir" || true
done
# Search for avatarFileId across entire repo
echo -e "\n==> Search for avatarFileId across repo:"
rg -n "avatarFileId" || true
# Search for any remaining 'image' references (excluding tests)
echo -e "\n==> Search for 'user.image' or 'image:' references (excluding tests/specs):"
rg -n "(\.image\b|\?\.\image\b|image\s*:)" -g '!*.test.*' -g '!*.spec.*' || true
Length of output: 1773
🏁 Script executed:
#!/bin/bash
echo "==> Search for '.image' or '?.image' usages (excluding tests):"
rg -n "\.image\b|\?\.\image\b" -g '!*.test.*' -g '!*.spec.*' || echo "No .image patterns found"
echo -e "\n==> Search for keys named 'image:' (excluding tests):"
rg -n "image\s*:" -g '!*.test.*' -g '!*.spec.*' || echo "No 'image:' keys found"
Length of output: 417
🏁 Script executed:
#!/bin/bash
echo "==> Searching for '.image' or '?.image' in TS/TSX (excluding tests/specs):"
rg -nP '(\.|\?\.)image\b' --glob '*.ts' --glob '*.tsx' --glob '!*.test.ts' --glob '!*.test.tsx' --glob '!*.spec.ts' --glob '!*.spec.tsx' || echo "No .image usages found"
echo -e "\n==> Searching for object keys 'image:' in TS/TSX (excluding tests/specs):"
rg -nP 'image\s*:' --glob '*.ts' --glob '*.tsx' --glob '!*.test.ts' --glob '!*.test.tsx' --glob '!*.spec.ts' --glob '!*.spec.tsx' || echo "No object key 'image:' found"
Length of output: 264
Add migration for avatarFileId and update lingering image reference.
- No Prisma migration detected for the new `avatarFileId` field; create and commit one.
- Replace `session.data?.user.image` with `session.data?.user.avatarFileId` in src/layout/manager/nav-user.tsx (line 52).
🤖 Prompt for AI Agents
In src/features/user/schema.ts around line 27 and across the schema/migrations,
the new avatarFileId field (avatarFileId: z.string().nullish()) was added but no
Prisma migration was created; generate a Prisma migration that adds the
avatarFileId column to the corresponding User model in schema.prisma and
run/commit the migration files. Also update the lingering reference in
src/layout/manager/nav-user.tsx at line 52 to use
session.data?.user.avatarFileId instead of session.data?.user.image so the UI
reads the new field. Ensure tests/build succeed and commit both the migration
and the nav-user.tsx change together.
import { minio } from 'better-upload/server/helpers';

import { envServer } from '@/env/server';

export const s3client = minio({
  endpoint: envServer.S3_ENDPOINT,
  accessKeyId: envServer.S3_ACCESS_KEY_ID,
  secretAccessKey: envServer.S3_SECRET_ACCESS_KEY,
  region: envServer.S3_REGION,
});
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Verify dual S3 client usage pattern.
This file exports an S3 client using `better-upload/server/helpers` (a MinIO wrapper), while src/lib/s3/index.ts exports a separate S3 client using the AWS SDK. Having two S3 clients could lead to confusion about which to use where.
Run the following script to check how each client is used:
If both clients serve distinct purposes (e.g., `better-upload` for upload-specific helpers, AWS SDK for general operations), consider adding a comment in each file explaining when to use which client.
🏁 Script executed:
#!/bin/bash
# Description: Check usage of both S3 clients to verify they serve different purposes
echo "==> Usage of better-upload s3client (object-storage):"
rg -n --type ts "from '@/lib/object-storage'" -A 2 -B 1
echo -e "\n==> Usage of AWS SDK s3client (s3):"
rg -n --type ts "from '@/lib/s3'" -A 2 -B 1
echo -e "\n==> Check if both are used in same files:"
rg -l --type ts "from '@/lib/object-storage'" | while read -r f; do
if rg -q "from '@/lib/s3'" "$f"; then
echo "Both clients imported in: $f"
fi
done
Length of output: 538
Remove or integrate unused MinIO client
The `s3client` exported from src/lib/object-storage/index.ts isn't imported anywhere. Either update your upload/account logic to use this MinIO-based client or remove the unused module to avoid dead code.
🤖 Prompt for AI Agents
In src/lib/object-storage/index.ts lines 1-10, the exported s3client (MinIO
wrapper) is not used anywhere; either remove this unused module to eliminate
dead code or integrate it by updating the upload/account code paths to import
s3client and use it for S3/MinIO operations. If integrating, replace current
direct S3 calls (or mock clients) with imports from this module, ensure the
envServer values are correct and passed, add any necessary error handling/retry,
and update tests and imports; if removing, delete the file and any references
and run tests/linters to confirm no breakage.
Actionable comments posted: 2
🧹 Nitpick comments (1)
src/routes/api/upload.ts (1)
18-20: Add common image formats and reduce max file size. The current configuration is missing common formats and has an excessive size limit:
- Missing formats: `image/gif` and `image/avif` are common avatar formats. AVIF in particular offers superior compression.
- Size limit: 100MB is excessive for avatars. Typical avatar uploads are 5-10MB maximum, and reasonable limits improve UX and reduce resource usage.
Apply this diff to add missing formats and reduce the limit:

 avatar: route({
-  fileTypes: ['image/png', 'image/jpeg', 'image/webp'],
-  maxFileSize: 1024 * 1024 * 100, // 100Mb
+  fileTypes: ['image/png', 'image/jpeg', 'image/webp', 'image/gif', 'image/avif'],
+  maxFileSize: 1024 * 1024 * 10, // 10MB
   onBeforeUpload: async ({ req, file }) => {
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/routes/api/upload.ts
(1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-02-21T08:38:40.338Z
Learnt from: yoannfleurydev
PR: BearStudio/start-ui-web#443
File: src/lib/s3/config.ts:13-13
Timestamp: 2025-02-21T08:38:40.338Z
Learning: In the avatar upload feature, the S3 key is intentionally set to `avatars/${user.id}` to ensure that new avatar uploads overwrite the previous one, preventing storage bloat and simplifying avatar management.
Applied to files:
src/routes/api/upload.ts
🧬 Code graph analysis (1)
src/routes/api/upload.ts (4)
- src/server/router.ts (2): router (11-16), Router (8-8)
- src/lib/s3/index.ts (1): s3client (5-12)
- src/env/server.ts (1): envServer (9-44)
- src/server/auth.tsx (1): auth (23-100)
🪛 GitHub Check: 🧹 Linter
src/routes/api/upload.ts
[warning] 26-26:
Complete the task associated to this "TODO" comment
🔇 Additional comments (2)
src/routes/api/upload.ts (2)
1-12: LGTM! The imports are appropriate and align with the better-upload server integration and TanStack Router pattern.
44-50: LGTM! The route definition follows TanStack Router patterns correctly, with the server-side POST handler properly configured to delegate to `better-upload`'s request handler.
// The message error thrown here is visible to the user
// as a field error message.
// [TODO] Check to send i18n key so the message can be translated
throw new RejectUpload('Not logged in!');
Complete the TODO: Implement i18n for error messages.
The error message `'Not logged in!'` is hardcoded and visible to users as a field error. This prevents localization and is flagged by the TODO comment. Better-upload surfaces these messages in the UI, so they should be translatable.
Would you like me to help implement an i18n solution for error messages in upload validation? I can suggest a pattern that integrates with your existing translation system.
🧰 Tools
🪛 GitHub Check: 🧹 Linter
[warning] 26-26:
Complete the task associated to this "TODO" comment
🤖 Prompt for AI Agents
In src/routes/api/upload.ts around lines 24-27, replace the hardcoded error
message thrown to users with a translatable identifier: instead of throw new
RejectUpload('Not logged in!') emit a structured error that carries an i18n key
(e.g. 'upload.not_logged_in') and optional defaultMessage; update RejectUpload
to accept { i18nKey: string, defaultMessage?: string } (or add an i18nKey
property) so the frontend can resolve translations, and keep the defaultMessage
as a fallback for untranslated locales.
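The pattern described in the prompt above can be sketched as follows. This is a hypothetical illustration, not better-upload's actual API: `RejectUploadWithKey`, `requireSession`, and `resolveUploadError` are invented names, and the `{ i18nKey, defaultMessage }` shape is an assumption about how the error could carry a translation key to the client.

```typescript
// Hypothetical sketch: an upload rejection error that carries an i18n key,
// so the client can translate the message while keeping a default fallback.
class RejectUploadWithKey extends Error {
  readonly i18nKey: string;

  constructor(options: { i18nKey: string; defaultMessage?: string }) {
    super(options.defaultMessage ?? options.i18nKey);
    this.i18nKey = options.i18nKey;
  }
}

// Server side: reject with a key instead of raw English text.
function requireSession(session: { user?: { id: string } } | null) {
  if (!session?.user) {
    throw new RejectUploadWithKey({
      i18nKey: 'upload.not_logged_in',
      defaultMessage: 'Not logged in!',
    });
  }
  return session.user;
}

// Client side: resolve the key against loaded translations, falling back
// to the default message for untranslated locales.
function resolveUploadError(
  error: unknown,
  translate: (key: string) => string | undefined
): string {
  if (error instanceof RejectUploadWithKey) {
    return translate(error.i18nKey) ?? error.message;
  }
  return error instanceof Error ? error.message : 'Upload failed';
}
```

The key point of the design is that the translation lookup happens on the client, so the server never needs to know the user's locale.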
const fileExtension = file.type.split('/').at(-1) as string;

return {
  // Here we want to keep the filename always the same
  // Each avatar upload will erase the previous one.
  objectInfo: {
    key: `${session.user.id}/avatar.${fileExtension}`,
Critical: Object key includes extension, preventing overwrites.
The current key pattern `${session.user.id}/avatar.${fileExtension}` includes the file extension, which means uploading a different format (e.g., PNG then WebP) creates separate objects (userId/avatar.png, userId/avatar.webp) instead of overwriting. This defeats the stated intent in lines 33-34 and contradicts the learning that keys should be deterministic to prevent storage bloat.
Additionally, line 30 extracts the extension without validating that `file.type` is well-formed, risking undefined behavior if the type is malformed.
Apply this diff to use a format-agnostic key that ensures overwrites:
- const fileExtension = file.type.split('/').at(-1) as string;
-
return {
// Here we want to keep the filename always the same
// Each avatar upload will erase the previous one.
objectInfo: {
- key: `${session.user.id}/avatar.${fileExtension}`,
+ key: `avatars/${session.user.id}`,
},
+ metadata: {
+ userId: session.user.id,
+ contentType: file.type,
+ },
};
Based on learnings
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const fileExtension = file.type.split('/').at(-1) as string;

return {
  // Here we want to keep the filename always the same
  // Each avatar upload will erase the previous one.
  objectInfo: {
    key: `${session.user.id}/avatar.${fileExtension}`,

// (Remove the now-unused fileExtension extraction)
return {
  // Here we want to keep the filename always the same
  // Each avatar upload will erase the previous one.
  objectInfo: {
    key: `avatars/${session.user.id}`,
  },
  metadata: {
    userId: session.user.id,
    contentType: file.type,
  },
};
🤖 Prompt for AI Agents
In src/routes/api/upload.ts around lines 30 to 36, the object key currently
includes the file extension which prevents overwrites across different formats
and the file.type split isn't validated; change the key to a format-agnostic
deterministic value (e.g., `${session.user.id}/avatar`) so every upload replaces
the previous object, and stop deriving extension for the storage key;
additionally, validate file.type before using it (guard against undefined or
malformed values) and, if needed, extract and use the content type only for the
upload metadata (not the key), falling back to a safe default or returning a 400
error when the MIME type is invalid.
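The format-agnostic key derivation described above can be sketched as a small helper. `buildAvatarKey` and `ALLOWED_AVATAR_TYPES` are hypothetical names, not code from the PR; the allowed types assume the MIME types listed in the route config.

```typescript
// Hypothetical helper: validate the MIME type, then derive a deterministic,
// extension-free object key so every upload replaces the previous avatar.
const ALLOWED_AVATAR_TYPES = new Set(['image/png', 'image/jpeg', 'image/webp']);

function buildAvatarKey(
  userId: string,
  mimeType: string
): { key: string; contentType: string } {
  if (!ALLOWED_AVATAR_TYPES.has(mimeType)) {
    // Reject malformed or unexpected MIME types instead of blindly
    // splitting the string to build a file extension.
    throw new Error(`Unsupported avatar MIME type: ${mimeType}`);
  }
  return {
    // No extension in the key: png, jpeg, and webp uploads all map to the
    // same object, so the previous avatar is always overwritten.
    key: `avatars/${userId}`,
    // The content type is kept for storage metadata, not for the key.
    contentType: mimeType,
  };
}
```

With this shape, `buildAvatarKey('u1', 'image/png').key` and `buildAvatarKey('u1', 'image/webp').key` both resolve to `avatars/u1`, which is what makes the overwrite behavior deterministic.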
Describe your changes
Set up basic file upload on startui
Screenshots
CleanShot.2025-09-12.at.15.30.32.mp4
In the db, we store the path to the uploaded file (here an avatar)

Checklist
- Ran the `pnpm storybook` command and everything is working
(If you cannot update the french language, just let us know in the PR description)
Summary by CodeRabbit
New Features
Localization
Chores