Artificial intelligence (AI) platforms in East Asia often elicit privacy concerns yet sustain user participation. This study interprets the pattern as bounded compliance: a satisficing equilibrium in which engagement persists once users perceive that platform governance meets minimum transparency and reliability thresholds. Matched adult surveys in Fujian, China (N = 185) and Busan–Gyeongnam, Korea (N = 187) examine how accountability visibility and privacy concern jointly shape platform trust and use. Heat-map diagnostics and logit marginal effects show consistently high predicted willingness (≥0.70) across conditions, with stronger sensitivity to accountability in Korea and stronger reliance on continuity assurance in China. Under high privacy concern, willingness converges to a “good-enough” zone in which participation endures despite discomfort. The findings highlight governance thresholds as practical levers for trustworthy AI: enhancing feedback visibility (e.g., case tracking, resolution proofs) and maintaining institutional continuity (e.g., operations-and-maintenance (O&M) capacity, incident-response coverage) can sustain public confidence in AI-enabled public-service platforms.
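For readers unfamiliar with the logit marginal-effects step summarized above, the sketch below shows how average marginal effects of this kind are typically estimated with statsmodels. It is a minimal illustration only: the variable names (accountability_visibility, privacy_concern, korea, willing) and the synthetic data are assumptions standing in for the study's actual survey instrument and dataset.

```python
# Illustrative sketch (not the study's code): logit average marginal
# effects of accountability visibility and privacy concern on
# willingness to use, with synthetic data in place of the survey.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 372  # combined sample size reported in the abstract (185 + 187)

# Hypothetical predictors: all names are assumptions for illustration.
df = pd.DataFrame({
    "accountability_visibility": rng.integers(1, 6, n),  # 1-5 Likert
    "privacy_concern": rng.integers(1, 6, n),            # 1-5 Likert
    "korea": rng.integers(0, 2, n),                      # 0 = China, 1 = Korea
})

# Synthetic binary outcome: willingness to keep using the platform.
logit_index = (-0.5
               + 0.6 * df["accountability_visibility"]
               - 0.3 * df["privacy_concern"]
               + 0.2 * df["korea"])
p = 1 / (1 + np.exp(-logit_index))
df["willing"] = rng.binomial(1, p)

# Fit the logit and report average marginal effects (dP/dx averaged
# over the sample), the quantity the abstract summarizes.
X = sm.add_constant(df[["accountability_visibility",
                        "privacy_concern", "korea"]])
res = sm.Logit(df["willing"], X).fit(disp=0)
print(res.get_margeff(at="overall", method="dydx").summary())
```

With real survey data, the marginal-effect estimates (and their country-specific contrasts) would correspond to the accountability and continuity sensitivities reported above; the synthetic coefficients here carry no substantive interpretation.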