This paper introduces an enhanced version of Dr.Scratch, a web-based tool for automatically assessing Computational Thinking (CT) skills evident in visual programming projects. We propose a fuzzy logic-based scoring framework to address limitations in existing rule-based assessment systems. The paper reviews relevant prior initiatives, details the analytical framework employed to interpret Scratch code, and elucidates the computational aspects considered when deriving a CT score from user-created artifacts. Our methodology integrates fuzzy inference to generate continuous, explainable scores across core CT dimensions. Experimental analysis of over 250 Scratch projects demonstrates that the fuzzy scoring model provides finer granularity, better alignment with educator evaluation, and improved interpretability compared to deterministic approaches. We present preliminary findings from our investigation, discuss future directions, and address current limitations in automated educational assessment. The contributions of this work advance the field of Educational AI toward more nuanced, pedagogically grounded computational thinking assessment.
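The fuzzy inference described above can be sketched for a single CT dimension as follows. This is a minimal illustration only: the membership functions, linguistic levels, and consequent scores are hypothetical assumptions for exposition, not the actual parameters used in the system.

```python
# Illustrative sketch of fuzzy scoring for one hypothetical CT dimension.
# All membership-function breakpoints and rule consequents below are
# assumed values chosen for demonstration.

def tri(x, a, b, c):
    """Triangular membership function: peaks at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_ct_score(feature_count):
    """Map a raw feature count (e.g. distinct constructs used in a
    project) to a continuous score on a 0-10 scale via Mamdani-style
    inference with singleton consequents."""
    # Fuzzification: degree of membership in each linguistic level.
    low = tri(feature_count, -1, 0, 4)
    med = tri(feature_count, 2, 5, 8)
    high = tri(feature_count, 6, 10, 14)
    # Each rule maps a linguistic level to a crisp consequent score;
    # defuzzification is a weighted average of the fired consequents.
    fired = [(low, 2.0), (med, 5.0), (high, 9.0)]
    total = sum(weight for weight, _ in fired)
    if total == 0:
        return 0.0
    return sum(weight * score for weight, score in fired) / total
```

Unlike a rule-based threshold scheme that jumps between discrete levels, this formulation yields intermediate scores (e.g. between "medium" and "high") whenever a project partially activates adjacent levels, which is what gives the model its finer granularity.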