Despite significant advances in deep neural networks across diverse domains, challenges persist in safety-critical contexts, including sensitivity to domain shift and unreliable uncertainty estimates. To address these issues, this study investigates Bayesian learning for principled uncertainty quantification in modern neural networks. However, the high-dimensional, non-convex posterior distribution makes exact inference intractable, limiting practical epistemic uncertainty estimation. The Laplace approximation, a cost-efficient Bayesian method, addresses this by approximating the posterior with a multivariate normal distribution, but computing and storing the full covariance matrix remains a computational bottleneck. This research employs subnetwork inference, performing Bayesian inference over only a subset of the parameter space. In addition, a Kronecker-factored, low-rank representation of the curvature is explored to reduce space complexity and computational cost. Several corrections are introduced to bring the approximated curvature closer to the exact Hessian. Numerical results demonstrate the effectiveness and competitiveness of the method, while qualitative experiments highlight how the granularity of the Hessian approximation and the fraction of the parameter space treated probabilistically affect the mitigation of overconfident predictions and the quality of the resulting uncertainty estimates.
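For reference, a minimal sketch of the quantities involved, assuming the standard Laplace and K-FAC notation (the symbols \(\theta_{\mathrm{MAP}}\), \(H\), \(A_{\ell-1}\), and \(G_{\ell}\) are illustrative and not taken from the text above): the Laplace approximation fits a Gaussian at a mode of the posterior,
\[
p(\theta \mid \mathcal{D}) \;\approx\; \mathcal{N}\!\big(\theta_{\mathrm{MAP}},\, H^{-1}\big),
\qquad
H \;=\; -\nabla_{\theta}^{2}\,\log p(\theta \mid \mathcal{D})\,\Big|_{\theta = \theta_{\mathrm{MAP}}},
\]
and a Kronecker-factored representation in the style of K-FAC approximates each layer's curvature block as \(H_{\ell} \approx A_{\ell-1} \otimes G_{\ell}\), where \(A_{\ell-1}\) is the second moment of the layer's inputs and \(G_{\ell}\) that of the pre-activation gradients, so only the two small factors need to be stored and inverted rather than the full block.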