The KKT method does apply to nonlinear constraints, but not unconditionally. As Dominique said, a constraint qualification (e.g., LICQ, or Slater's condition in the convex case) must hold for the KKT conditions to be necessary.
For the KKT conditions to be sufficient to characterize the solution of a maximization problem, your objective function must be concave and your constraints convex, that is, you must be able to write each constraint as
$g(x) \leqslant 0$
where $g: \mathbb{R}^n \to \mathbb{R}$ is a convex function. For minimization problems, you want the objective function to be convex.
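For concreteness, here is what the conditions look like for a differentiable convex minimization problem $\min f(x)$ subject to $g_i(x) \leqslant 0$, $i = 1, \dots, m$ (maximizing a concave $f$ is equivalent to minimizing the convex $-f$); the symbols $x^\star$ and $\lambda_i$ are just generic names for a candidate solution and its multipliers:
$$
\begin{aligned}
\nabla f(x^\star) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x^\star) &= 0, \\
g_i(x^\star) \leqslant 0, \quad \lambda_i \geqslant 0, \quad \lambda_i\, g_i(x^\star) &= 0, \qquad i = 1, \dots, m.
\end{aligned}
$$
The last line, complementary slackness, is what encodes the geometry: a multiplier can only be positive on a constraint that is active at $x^\star$.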
Boyd and Vandenberghe have a very good book on this, Convex Optimization, which is freely available and accompanied by video lectures taught by Stephen Boyd. I also have some short notes that work as a complement, or as an introduction to a more rigorous and extensive treatment. For the nonconvex case, I really like the treatment in the book "Numerical Optimization" by Nocedal and Wright.
Advice: make sure you understand the geometry of the case with inequality constraints. The continuously differentiable case is the easiest place to start. My notes include a picture and a very brief discussion of this if you want a quick treatment. To make sense of it you mainly need one fact (justified below): any direction that makes an obtuse angle (greater than 90 degrees) with the gradient of a continuously differentiable function is a descent direction. The rest is linear algebra.
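If it helps, the justification is one line: for $f$ continuously differentiable at $x$ and a direction $d \neq 0$, a first-order Taylor expansion gives
$$
f(x + t d) = f(x) + t\,\nabla f(x)^\top d + o(t), \qquad t > 0,
$$
and $\nabla f(x)^\top d = \|\nabla f(x)\|\,\|d\|\cos\theta$ is negative exactly when the angle $\theta$ between $d$ and $\nabla f(x)$ exceeds $90$ degrees, so $f(x + t d) < f(x)$ for all sufficiently small $t > 0$.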