You can accomplish this with a unique index on a view that excludes rows where A and B are both null:

    CREATE TABLE tbl(a int null, b int null, otherCol varchar(10))
    GO

    create view dbo.tbl_unq with schemabinding as
    select a, b, otherCol FROM dbo.tbl
    WHERE not (a is null AND b is null)
    GO

    create unique clustered index unq_tbl_unq
    on dbo.tbl_unq(a, b)
    GO

    insert tbl values(1, 1, 'a')       --yes
    insert tbl values(1, 2, 'a')       --yes
    insert tbl values(1, 1, 'a')       --dupe
    insert tbl values(null, 1, 'a')    --yes
    insert tbl values(null, 1, 'a')    --dupe
    insert tbl values(null, null, 'a') --yes
    insert tbl values(null, null, 'a') --yes
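For comparison, here is a rough sketch of the kind of check-constraint approach this is being weighed against (the function and constraint names, and the exact predicate, are my own illustration, not taken from the question): a scalar UDF re-counts matching rows and is called from a check constraint.

    -- Sketch of a check-constraint alternative (hypothetical names).
    create function dbo.CountMatchingAB(@a int, @b int)
    returns int
    with schemabinding
    as
    begin
        -- Count rows sharing the same (a, b) pair, treating NULL = NULL as a match
        return (select count(*)
                from dbo.tbl
                where (a = @a or (a is null and @a is null))
                  and (b = @b or (b is null and @b is null)));
    end
    GO

    alter table dbo.tbl add constraint chk_tbl_ab_unique
        check ((a is null and b is null)           -- both-null rows always allowed
               or dbo.CountMatchingAB(a, b) <= 1); -- otherwise the pair must be unique
    GO

Because the UDF re-queries dbo.tbl for every row touched, each insert or update pays an extra scan; the indexed view above enforces the same rule declaratively.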
The indexed view avoids the overhead of the check constraint; however, if the view is dropped, you lose the constraint.

Is there any specific business reason why you need those columns to be nullable, and both null at the same time? As you mentioned, that is a fundamental conflict with uniqueness, so which facet is more important?