Wednesday, January 21, 2015

SqlServer AD Login Auditing with nested group parsing

A simple script to help determine who has access to your SQL Server and through which groups, with some generic sysadmin (SA) rights checking. It requires the DSQuery tools (a Windows feature) to be installed so the script can query the AD server, as well as the permissions to run both xp_logininfo and xp_cmdshell.
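If xp_cmdshell is not already enabled on the instance, the standard sp_configure approach will turn it on. Be mindful of the security implications before enabling it anywhere sensitive:

```sql
-- Enable xp_cmdshell (requires ALTER SETTINGS permission).
exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'xp_cmdshell', 1;
reconfigure;
```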

    
  declare @sql Varchar(max)  
    ,@ADName Varchar(max)  
    ,@LoginType Varchar(Max)  
    ,@cmd varchar(2000)  
    ,@DomainPrefix Varchar(max)  
 
 ---------------------SETTINGS------------------------------------  
  -- Domain Prefix: for the account Mydomain\Mylogin the domain prefix would be the Mydomain\ piece. Include the \  
  set @DomainPrefix = 'FHLBDM\'  
 -----------------------------------------------------------------  
 
  declare     @BadLogins table (  
    Name varchar(max)  
    )  
  Declare @GroupPerms table (            
       [AccountName] [varchar](max) NULL,  
      [Type] [varchar](max) NULL,  
      [Privelege] [varchar](max) NULL,  
      [Mapped_Login_name] [varchar](max) NULL,  
      [Permission_Path] [varchar](max) NULL  
    )  
  declare @GroupUsers table(  
      [AccountName] [varchar](max) NULL,  
      [Type] [varchar](max) NULL,  
      [Privelege] [varchar](max) NULL,  
      [Mapped_Login_name] [varchar](max) NULL,  
      [Permission_Path] [varchar](max) NULL  
    )  
 
  Declare @nestedTemp table(  
      Value varchar(max)  
      )                 
     
  declare LoginCursor Cursor for  
  SELECT name,type  
   FROM sys.server_principals   
   where type in ('u','g')  
 
    
  open LoginCursor  
  Fetch next from LoginCursor into @ADName,@LoginType  
 
  while @@FETCH_STATUS = 0  
  begin  
   If @LoginType = 'u'  
   Begin  
    set @sql = 'EXEC xp_logininfo '''+ @ADName + ''''  
    Print (@Sql)  
    Begin Try  
      Insert @GroupUsers  
       exec (@sql)  
    End Try  
    Begin Catch  
      insert @BadLogins   
       select @ADName  
    end Catch       
   end  
   --If the principal is a group then determine whether the group is part of an admin group  
   Else if @LoginType = 'g'  
   Begin  
    set @sql = 'EXEC xp_logininfo '''+ @ADName + ''',''all'''  
    Print (@Sql)  
    begin try  
      delete @GroupPerms  
      Insert @GroupPerms  
      exec (@sql)  
      
    end try  
    begin catch  
      insert @BadLogins   
       select @ADName  
    end catch  
    --Nested Group Parsing of all users(Requires DSQuery Installed)  
    set @Cmd = 'dsquery group -samid "'+replace(@ADName,@DomainPrefix,'')+'" |dsget group -members -expand -l | dsget user -samid -c -l'  
    Print (@Cmd)  
     
    Delete @nestedTemp  
    insert @nestedTemp  
    exec xp_cmdshell @command_String = @cmd       
     
    If exists( select * from @nestedTemp where Value like '%is not recognized as an internal or external command%')  
    begin  
      Raiserror('DSQUERY not enabled on system. Cannot parse nested groups. Install remote admin tools via windows features',11,11)  
    end  
    else  
    begin   
      Insert @GroupUsers  
       Select Replace(nt.Value,'samid: ',@DomainPrefix) Account  
         ,gp.Type  
         ,GP.Privelege  
         ,GP.Mapped_Login_name  
         ,@ADName Permission_Path  
       from @nestedTemp nt  
       left join @GroupPerms gp  
       on 1=1  
       where Value like 'samid:%'  
    end  
   End       
   Fetch next from LoginCursor into @ADName,@LoginType  
  end  
  close LoginCursor  
  deallocate LoginCursor  
   
  --Return the list of accounts that have access and how they have that access.  
  --AccountName is the Windows name (could be local or domain).  
  --Type dictates whether the account accesses the server via a group or via direct access.  
  --Privelege states either admin or user; admin indicates sysadmin (SA) rights.  
  --Mapped_Login_name and Permission_Path are typically the same in the case of group membership.  
  --For direct access via a user account the Permission_Path will be null.  
  Select *  
    from @GroupUsers  
  order by AccountName  
   
  --Accounts/Groups can be removed from AD or the local computer without SQL Server being aware.  
  --For this reason we store those bad records and report them here.  
  Select Name as 'Invalid Groups or logins(Deleted or renamed?)' from @BadLogins  
   
 

Tuesday, April 9, 2013

Listing XML Elements and Attribute Names with XQuery Paths

It's been a while since I made a post. I have been quite busy with the new job, but I resurrected a piece of code that I thought might be useful to someone else. The following code will list the name of each entity in the XML doc along with a compatible XQuery path.

DECLARE @xml XML
SET @xml ='<?xml version="1.0"?>
<DTSConfiguration>
 <DTSConfigurationHeading>
  <DTSConfigurationFileInfo GeneratedBy="Generic" GeneratedFromPackageName="Generic" GeneratedFromPackageID="{AA722E45-EE85-4C19-9BFA-8776E1A7463B}" GeneratedDate="7/31/2019 11:27:59 AM"/>
 </DTSConfigurationHeading>
 <Configuration ConfiguredType="Property" Path="\Package.Connections[Test].Properties[ConnectionString]" ValueType="String">
  <ConfiguredValue>Data Source=Local;Initial Catalog=Tester;Provider=SQLNCLI10.1;Integrated Security=SSPI;Auto Translate=False;</ConfiguredValue>
 </Configuration>
 <Configuration ConfiguredType="Property" Path="\Package.Connections[Test].Properties[ServerName]" ValueType="String">
  <ConfiguredValue>local</ConfiguredValue>
 </Configuration>
</DTSConfiguration>'
;
WITH CTE
AS (
 SELECT node.value('fn:local-name(.)', 'varchar(max)') NAME
  ,'/' + node.value('fn:local-name(.)', 'varchar(max)') Path
  ,node.query('*') Children
  ,Convert(varchar(max),'') Parent
  ,0 as depth
  FROM @xml.nodes('*') AS Root(node)
 
 UNION ALL
 
 SELECT node.value('fn:local-name(.)', 'varchar(max)') NAME
  ,Parent+'/' +cte.NAME + '/' + node.value('fn:local-name(.)', 'varchar(max)') Path
  ,node.query('*') Children
  ,Parent + '/'+ NAME  Parent
  ,depth +1
 FROM cte
 CROSS APPLY cte.children.nodes('*') AS Child(node)
 
 UNION ALL
 
 SELECT '@'+node.value('fn:local-name(.)', 'varchar(max)') NAME
  ,Parent+'/' +CTE.NAME + '/'+node.value('fn:local-name(../.)', 'varchar(max)')+  '/@' + node.value('fn:local-name(.)', 'varchar(max)') Path
  ,'' Children
  ,Parent + '/'+ NAME Parent
  ,depth +1
 FROM cte
 CROSS APPLY cte.children.nodes('/*/@*') AS Child(node)
 )
SELECT *
FROM cte
order by depth
 
 

Thursday, May 24, 2012

Distinct Values based on a single column

DISTINCT is a great tool! However, sometimes I find myself trying to retrieve records that are distinct with respect to a single column. Consider the following.
SELECT c1 , c2 
FROM (
SELECT 1 c1,'a' c2
UNION ALL
SELECT 1,'d'
UNION ALL
SELECT 2,'b'
UNION ALL
SELECT 3,'c'
UNION ALL
SELECT 4,'e'
UNION ALL
SELECT 4,'f'
) q1
This will return the rowset
c1 c2  
1  a
1  d
2  b
3  c
4  f
4  e
Well what if I only wanted to return distinct values based on c1 so that my result set looks like this:
c1 c2   
1  d
2  b
3  c
4  f
DISTINCT won't quite get the job done in that case: even though c1 has repeated values, the other columns make each row as a whole unique.

Review the following
SELECT c1 , c2 , ROW_NUMBER() over (PARTITION by c1 order by RAND() ) RowNum
FROM (
SELECT 1 c1,'a' c2
UNION ALL
SELECT 1,'d'
UNION ALL
SELECT 2,'b'
UNION ALL
SELECT 3,'c'
UNION ALL
SELECT 4,'e'
UNION ALL
SELECT 4,'f'
) q1
In this case we are assigning a row number to each row. The PARTITION BY clause restarts the numbering for each distinct value of the specified column. This will return the following result set:
c1 c2 RowNum
1  d  1
1  a  2
2  b  1
3  c  1
4  f  1
4  e  2
So, as you can see, subsequent rows with duplicate c1 values get a RowNum greater than 1. If we wrap this in an outer query such as
select * from (

SELECT c1 , c2 , ROW_NUMBER() over (PARTITION by c1 order by RAND() ) RowNum
FROM (
SELECT 1 c1,'a' c2
UNION ALL
SELECT 1,'d'
UNION ALL
SELECT 2,'b'
UNION ALL
SELECT 3,'c'
UNION ALL
SELECT 4,'e'
UNION ALL
SELECT 4,'f'
) q1
) q2 where RowNum <2
we get this:
c1 c2 RowNum
1  d  1
2  b  1
3  c  1
4  f  1
which is exactly what we were looking for!
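As a side note, the ORDER BY RAND() inside the ROW_NUMBER() call picks an arbitrary row from each c1 group, so the surviving row can vary between runs. If you want a deterministic pick, order by a real column instead; for example, keeping the alphabetically last c2 per c1 (same pattern, just a different ORDER BY):

```sql
select c1, c2
from (
    select c1, c2
        ,row_number() over (partition by c1 order by c2 desc) RowNum
    from (
        select 1 c1,'a' c2 union all
        select 1,'d' union all
        select 2,'b' union all
        select 3,'c' union all
        select 4,'e' union all
        select 4,'f'
    ) q1
) q2
where RowNum < 2
--returns the same (1,d),(2,b),(3,c),(4,f) rows every time
```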

Friday, April 20, 2012

PowerShell: Detecting DB State (Mirroring, Primary, Failover)

This script pulls back the status of the databases on a selected server. It's just a base script; modify as needed.

# Load the SMO assembly first (ships with the SQL Server management tools)
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null

$server = "dwhtest-new"
$srv = New-Object Microsoft.SqlServer.Management.Smo.Server $server
foreach ($db in $srv.Databases)
{
    Write-Host $db.Name
    Write-Host "MirroringStatus:" $db.MirroringStatus
    Write-Host "DBState:" $db.Status
    Write-Host
}
If your DB's mirroring is still intact you will receive 'Synchronized' for MirroringStatus. If the database is the principal, its Status will say "Normal"; if it is the failover, it will say "Restoring". Unfortunately there is no way, that I'm aware of, to pull out a status of just "Mirror" or "Principal". You will have to build logic that checks both of those values.
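That check-both-values logic can be sketched as a small helper. The property values here ('Synchronized', 'Normal', 'Restoring') are taken from the behavior described above; treat this as an untested sketch rather than a verified SMO recipe:

```powershell
# Classify a database's mirroring role from MirroringStatus and Status.
function Get-MirrorRole {
    param($db)   # an SMO Database object
    if ($db.MirroringStatus -eq 'Synchronized') {
        if ("$($db.Status)" -match 'Restoring') { return 'Mirror' }
        if ("$($db.Status)" -match 'Normal')    { return 'Principal' }
    }
    return 'NotMirrored'
}
```

It slots into the foreach loop above in place of (or alongside) the Write-Host lines.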

String Concatenation

Concatenate multiple rows into a comma delimited string.


declare @table table (
id int
,val varchar(2)
)
insert @table
Values (1 ,'a'),
(1,'b'),
(1,'c'),
(1,'d'),
(2,'a'),
(2,'b'),
(2,'c'),
(3,'a'),
(3,'b')


-- For each id, FOR XML PATH('') glues the values into one string,
-- then STUFF strips the leading comma.
select id, stuff((
    select ',' + val from @table q1
    where q1.id = q2.id
    for xml path('')), 1, 1, '') val
from @table q2
group by q2.id

Friday, April 13, 2012

String Split in SQL Server (No stored proc or looping)

A common problem in SQL Server is manipulating strings, whether that involves converting multiple rows of data into one row or splitting one row into many; there are a multitude of examples of either. Having some extensive experience with manipulating XML directly in SQL Server in set-based fashion, I decided to take a different approach to splitting strings.
Consider the following example:
declare @String varchar(200)
set @string = 'test,test2,test3'

select @string
This would simply return a single row with the string exactly as it was set. However, what if you wanted the query to return something like this:
row1: test
row2: test2
row3: test3
Well, it turns out it can be done pretty simply. The first step is constructing your delimited list into a roughly formatted XML document. You can then use standard XQuery syntax to shred that document into its row-by-row element values. Below is the finished code example:

declare @TableList varchar(100)
set @TableList = 'test,test2,test3'
--show what xml looks like for example purpose only.
select Convert(xml,('<R>'+replace(@TableList,',','</R><R>')+'</R>')) as xmldoc

select t.value('.','varchar(200)')
    from (select convert(xml,('<R>'+replace(@TableList,',','</R><R>')+'</R>')) as xmldoc)  as a
    Cross apply a.xmldoc.nodes('./R') as b(t)
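One caveat: because the delimited string is pasted straight into an XML document, values containing XML-special characters such as '<' or '&' will make the Convert(xml, ...) fail. A sketch of one workaround is to entity-encode those characters first (the variable names here are just illustrative):

```sql
declare @TableList varchar(100), @Encoded varchar(300)
set @TableList = 'a&b,c<d,e'

--entity-encode the raw string so special characters survive the XML conversion
--(encode & first so we don't double-encode the entities we add)
set @Encoded = replace(replace(replace(@TableList,'&','&amp;'),'<','&lt;'),'>','&gt;')

--the value() call decodes the entities back to the original text
select t.value('.','varchar(200)') as Item
    from (select convert(xml,('<R>'+replace(@Encoded,',','</R><R>')+'</R>')) as xmldoc) as a
    Cross apply a.xmldoc.nodes('./R') as b(t)
```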

Tuesday, February 14, 2012

Database Scripting and powershell (Copying Backups)

This is the second part of my series of posts describing how I automated our Production –> Staging database restores. This part is still fairly early in the process and was basically a chance for me to hone my PowerShell skills. Anyway, on to the script:

Function GetBackupFromFilesystem
{
    param
    (
        [string]$Source,
        [string]$Destination 
    )
    #Debug Info
    #Write-host "destination:$destination\$i"
    #Write-host "source"= $source
    #####
    # Empty loop body: after the loop, $i holds the last file in the
    # directory listing (the most recent backup under the standard
    # SQL Server naming convention).
    foreach( $i in (ls $source )){ }
    if ((Test-Path -path ($destination +"\"+$i.Name)) -eq $True)
    {
        Write-host '**********************************************************************************'
        write-host ("File:"+ $destination+ "\"+$i.Name +" already exists")
        Write-host '**********************************************************************************'
        return @( "$($i.name)")
    }
    ls $destination | where {$_.name -like "*.bak"} | rm -force
    Write-host "DestinationFilesystem"
    Write-Host $destination
    #write-host "$i.path"
    Copy-Item  ($source + '\' +$i.name)  ($Destination + '\' +$i.name)
    return @( $i.name)
}

 

The script is fairly simple. It takes the source directory and the target directory, grabs the most recent file, and copies it to the destination drive. However, it only copies the file if it doesn't already exist at the destination, and it only uses the filename to determine that. Eventually I'll do some additional checking, but for now this satisfies our requirement. Also of note: it grabs the most recent file assuming you are using the standard backup naming convention for SQL Server 2008 R2. That convention appends the date to the backup filename, so more recent backups end up at the bottom of the directory listing and the most recent backup is the last file. If you do something different, you will have to change the logic of the "foreach( $i in (ls $source )){ }" line to ensure your directory listing shows the most recent file last. The function then returns the filename to the caller, which lets you perform further operations on that file without having to get the name again.
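Calling it looks like this; the UNC and local paths are purely illustrative:

```powershell
# Copy the most recent backup from the production share to the staging box.
$backupName = GetBackupFromFilesystem -Source "\\ProdServer\SQLBackups" -Destination "D:\StagingBackups"
Write-Host "Most recent backup: $backupName"
```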

There is a downside to this script, however: Copy-Item produces no status output. You get no feedback about the progress of the file copy, and with large backups you end up staring at a console that could be doing nothing or could be copying the file; you simply don't know. The following is a function that gives you a running percentage of how the file copy is going. I believe I lifted a good portion of this logic from somewhere, though the source now escapes me.

function Copy-File {
    param( [string]$Source, [string]$Destination)
    $Sourcefile = [io.file]::OpenRead($Source)
    $Destinationfile = [io.file]::OpenWrite($Destination)
    Write-Progress -Activity "Copying file " -status "$Source -> $Destination" -PercentComplete 0
    try {
        [byte[]]$buffer = new-object byte[] 4096
        [Long]$total = [Long]$count = 0
        do {
            $count =  $Sourcefile.Read($buffer, 0, $buffer.Length)
            $Destinationfile.Write($buffer, 0, $count)
            $total += $count
            if ($total % 1mb -eq 0) {
                Write-Progress -Activity "Copying file" -status "$Source -> $Destination" `
                   -PercentComplete ([int]($total/$Sourcefile.Length* 100))
            }
        } while ($count -gt 0)
    }
    finally {
        $Sourcefile.Close()
        $Destinationfile.Close()
    }
}

There's a problem with this method, however. If you are moving files locally it's just as fast as the built-in Copy-Item, but if you're copying from one network source to another network source, as a build server might do, you incur a heavy IO cost: the buffer must be filled locally and then written out to the destination network file, which creates a network bottleneck. As such, I don't actually use this function in my scripts, but I've included it because it's a handy piece of scripting and might have a use in your application.